00:00:00.000 Started by upstream project "autotest-per-patch" build number 132765 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.049 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.050 The recommended git tool is: git 00:00:00.050 using credential 00000000-0000-0000-0000-000000000002 00:00:00.052 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.087 Fetching changes from the remote Git repository 00:00:00.091 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.124 Using shallow fetch with depth 1 00:00:00.124 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.124 > git --version # timeout=10 00:00:00.202 > git --version # 'git version 2.39.2' 00:00:00.202 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.222 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.222 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.778 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.794 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.808 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.808 > git config core.sparsecheckout # timeout=10 00:00:05.822 > git read-tree -mu HEAD # timeout=10 00:00:05.841 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.868 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.868 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.981 [Pipeline] Start of Pipeline 00:00:05.996 [Pipeline] library 00:00:05.998 Loading library shm_lib@master 00:00:05.998 Library shm_lib@master is cached. Copying from home. 00:00:06.019 [Pipeline] node 00:00:06.030 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.032 [Pipeline] { 00:00:06.042 [Pipeline] catchError 00:00:06.043 [Pipeline] { 00:00:06.056 [Pipeline] wrap 00:00:06.064 [Pipeline] { 00:00:06.071 [Pipeline] stage 00:00:06.072 [Pipeline] { (Prologue) 00:00:06.332 [Pipeline] sh 00:00:06.661 + logger -p user.info -t JENKINS-CI 00:00:06.682 [Pipeline] echo 00:00:06.684 Node: GP8 00:00:06.693 [Pipeline] sh 00:00:07.000 [Pipeline] setCustomBuildProperty 00:00:07.009 [Pipeline] echo 00:00:07.011 Cleanup processes 00:00:07.016 [Pipeline] sh 00:00:07.300 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.300 864407 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.314 [Pipeline] sh 00:00:07.600 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.600 ++ awk '{print $1}' 00:00:07.600 ++ grep -v 'sudo pgrep' 00:00:07.600 + sudo kill -9 00:00:07.600 + true 00:00:07.612 [Pipeline] cleanWs 00:00:07.623 [WS-CLEANUP] Deleting project workspace... 00:00:07.623 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.630 [WS-CLEANUP] done 00:00:07.633 [Pipeline] setCustomBuildProperty 00:00:07.642 [Pipeline] sh 00:00:07.923 + sudo git config --global --replace-all safe.directory '*' 00:00:08.016 [Pipeline] httpRequest 00:00:08.425 [Pipeline] echo 00:00:08.427 Sorcerer 10.211.164.20 is alive 00:00:08.438 [Pipeline] retry 00:00:08.440 [Pipeline] { 00:00:08.458 [Pipeline] httpRequest 00:00:08.463 HttpMethod: GET 00:00:08.464 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.465 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.494 Response Code: HTTP/1.1 200 OK 00:00:08.494 Success: Status code 200 is in the accepted range: 200,404 00:00:08.494 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:27.698 [Pipeline] } 00:00:27.716 [Pipeline] // retry 00:00:27.723 [Pipeline] sh 00:00:28.012 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:28.032 [Pipeline] httpRequest 00:00:28.407 [Pipeline] echo 00:00:28.409 Sorcerer 10.211.164.20 is alive 00:00:28.421 [Pipeline] retry 00:00:28.424 [Pipeline] { 00:00:28.442 [Pipeline] httpRequest 00:00:28.448 HttpMethod: GET 00:00:28.448 URL: http://10.211.164.20/packages/spdk_c0f3f2d189d24d1da9524a2e485cd9aa1e003d81.tar.gz 00:00:28.449 Sending request to url: http://10.211.164.20/packages/spdk_c0f3f2d189d24d1da9524a2e485cd9aa1e003d81.tar.gz 00:00:28.457 Response Code: HTTP/1.1 200 OK 00:00:28.457 Success: Status code 200 is in the accepted range: 200,404 00:00:28.457 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c0f3f2d189d24d1da9524a2e485cd9aa1e003d81.tar.gz 00:03:38.932 [Pipeline] } 00:03:38.948 [Pipeline] // retry 00:03:38.957 [Pipeline] sh 00:03:39.248 + tar --no-same-owner -xf spdk_c0f3f2d189d24d1da9524a2e485cd9aa1e003d81.tar.gz 00:03:41.796 [Pipeline] sh 00:03:42.093 + git -C spdk log --oneline -n5 00:03:42.093 c0f3f2d18 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata) 00:03:42.093 7ab149b9a lib/reduce: Support storing metadata on backing dev. 
(1 of 5, struct define and init process) 00:03:42.093 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails 00:03:42.093 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions 00:03:42.093 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove 00:03:42.106 [Pipeline] } 00:03:42.123 [Pipeline] // stage 00:03:42.132 [Pipeline] stage 00:03:42.135 [Pipeline] { (Prepare) 00:03:42.152 [Pipeline] writeFile 00:03:42.168 [Pipeline] sh 00:03:42.456 + logger -p user.info -t JENKINS-CI 00:03:42.470 [Pipeline] sh 00:03:42.757 + logger -p user.info -t JENKINS-CI 00:03:42.772 [Pipeline] sh 00:03:43.060 + cat autorun-spdk.conf 00:03:43.060 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:43.060 SPDK_TEST_NVMF=1 00:03:43.060 SPDK_TEST_NVME_CLI=1 00:03:43.060 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:43.061 SPDK_TEST_NVMF_NICS=e810 00:03:43.061 SPDK_TEST_VFIOUSER=1 00:03:43.061 SPDK_RUN_UBSAN=1 00:03:43.061 NET_TYPE=phy 00:03:43.069 RUN_NIGHTLY=0 00:03:43.075 [Pipeline] readFile 00:03:43.103 [Pipeline] withEnv 00:03:43.105 [Pipeline] { 00:03:43.119 [Pipeline] sh 00:03:43.409 + set -ex 00:03:43.409 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:43.409 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:43.409 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:43.409 ++ SPDK_TEST_NVMF=1 00:03:43.409 ++ SPDK_TEST_NVME_CLI=1 00:03:43.409 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:43.409 ++ SPDK_TEST_NVMF_NICS=e810 00:03:43.409 ++ SPDK_TEST_VFIOUSER=1 00:03:43.409 ++ SPDK_RUN_UBSAN=1 00:03:43.409 ++ NET_TYPE=phy 00:03:43.409 ++ RUN_NIGHTLY=0 00:03:43.409 + case $SPDK_TEST_NVMF_NICS in 00:03:43.409 + DRIVERS=ice 00:03:43.409 + [[ tcp == \r\d\m\a ]] 00:03:43.409 + [[ -n ice ]] 00:03:43.409 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:43.409 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:43.409 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:43.409 rmmod: ERROR: Module irdma is not currently loaded 00:03:43.409 rmmod: ERROR: Module i40iw is not currently loaded 00:03:43.409 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:43.409 + true 00:03:43.409 + for D in $DRIVERS 00:03:43.409 + sudo modprobe ice 00:03:43.409 + exit 0 00:03:43.420 [Pipeline] } 00:03:43.436 [Pipeline] // withEnv 00:03:43.441 [Pipeline] } 00:03:43.455 [Pipeline] // stage 00:03:43.465 [Pipeline] catchError 00:03:43.467 [Pipeline] { 00:03:43.481 [Pipeline] timeout 00:03:43.482 Timeout set to expire in 1 hr 0 min 00:03:43.484 [Pipeline] { 00:03:43.498 [Pipeline] stage 00:03:43.500 [Pipeline] { (Tests) 00:03:43.515 [Pipeline] sh 00:03:43.802 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:43.802 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:43.802 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:43.802 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:43.802 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:43.802 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:43.802 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:43.802 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:43.802 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:43.802 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:43.802 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:43.802 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:43.802 + source /etc/os-release 00:03:43.802 ++ NAME='Fedora Linux' 00:03:43.802 ++ VERSION='39 (Cloud Edition)' 00:03:43.802 ++ ID=fedora 00:03:43.802 ++ VERSION_ID=39 00:03:43.802 ++ VERSION_CODENAME= 00:03:43.802 ++ PLATFORM_ID=platform:f39 00:03:43.802 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:43.802 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:43.802 ++ LOGO=fedora-logo-icon 00:03:43.802 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:43.802 ++ HOME_URL=https://fedoraproject.org/ 00:03:43.802 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:43.802 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:43.802 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:43.802 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:43.802 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:43.802 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:43.802 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:43.802 ++ SUPPORT_END=2024-11-12 00:03:43.802 ++ VARIANT='Cloud Edition' 00:03:43.802 ++ VARIANT_ID=cloud 00:03:43.802 + uname -a 00:03:43.802 Linux spdk-gp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:43.802 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:45.183 Hugepages 00:03:45.183 node hugesize free / total 00:03:45.183 node0 1048576kB 0 / 0 00:03:45.183 node0 2048kB 0 / 0 00:03:45.183 node1 1048576kB 0 / 0 00:03:45.183 node1 2048kB 0 / 0 00:03:45.183 00:03:45.183 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:45.183 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:45.183 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:45.183 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:45.183 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:45.183 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:45.183 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:45.183 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:45.183 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:45.183 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:45.183 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:45.183 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:45.183 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:45.183 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:45.183 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:45.183 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:45.183 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:45.183 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:45.183 + rm -f /tmp/spdk-ld-path 00:03:45.183 + source autorun-spdk.conf 00:03:45.183 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:45.183 ++ SPDK_TEST_NVMF=1 00:03:45.183 ++ SPDK_TEST_NVME_CLI=1 00:03:45.183 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:45.183 ++ SPDK_TEST_NVMF_NICS=e810 00:03:45.183 ++ SPDK_TEST_VFIOUSER=1 00:03:45.183 ++ SPDK_RUN_UBSAN=1 00:03:45.183 ++ NET_TYPE=phy 00:03:45.183 ++ RUN_NIGHTLY=0 00:03:45.183 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:45.183 + [[ -n '' ]] 00:03:45.183 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.183 + for M in /var/spdk/build-*-manifest.txt 00:03:45.183 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:03:45.183 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:45.183 + for M in /var/spdk/build-*-manifest.txt 00:03:45.183 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:45.183 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:45.183 + for M in /var/spdk/build-*-manifest.txt 00:03:45.183 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:45.183 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:45.183 ++ uname 00:03:45.183 + [[ Linux == \L\i\n\u\x ]] 00:03:45.183 + sudo dmesg -T 00:03:45.183 + sudo dmesg --clear 00:03:45.183 + dmesg_pid=865715 00:03:45.183 + [[ Fedora Linux == FreeBSD ]] 00:03:45.183 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:45.183 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:45.183 + sudo dmesg -Tw 00:03:45.183 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:45.183 + [[ -x /usr/src/fio-static/fio ]] 00:03:45.183 + export FIO_BIN=/usr/src/fio-static/fio 00:03:45.183 + FIO_BIN=/usr/src/fio-static/fio 00:03:45.183 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:45.183 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:45.183 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:45.183 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:45.183 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:45.183 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:45.183 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:45.183 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:45.183 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:45.183 06:07:35 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:45.183 06:07:35 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:45.183 06:07:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:45.183 06:07:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:45.183 06:07:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:03:45.183 06:07:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:45.183 06:07:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:03:45.183 06:07:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:03:45.183 06:07:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:03:45.183 06:07:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:03:45.183 06:07:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:03:45.183 06:07:35 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:45.183 06:07:35 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:45.183 06:07:35 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:45.183 06:07:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:45.183 06:07:35 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:45.183 06:07:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:45.183 06:07:35 -- scripts/common.sh@552 -- $ 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.183 06:07:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.183 06:07:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.183 06:07:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.183 06:07:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.183 06:07:35 -- paths/export.sh@5 -- $ export PATH 00:03:45.183 06:07:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.183 06:07:35 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:45.183 06:07:35 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:45.183 06:07:35 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733634455.XXXXXX 00:03:45.183 06:07:35 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733634455.oQQ4tz 00:03:45.183 06:07:35 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:45.183 06:07:35 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:45.183 06:07:35 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:45.183 06:07:35 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:45.183 06:07:35 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:45.183 06:07:35 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:45.183 06:07:35 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:45.183 06:07:35 -- common/autotest_common.sh@10 -- $ set +x 
00:03:45.183 06:07:35 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:45.183 06:07:35 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:45.183 06:07:35 -- pm/common@17 -- $ local monitor 00:03:45.183 06:07:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.183 06:07:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.183 06:07:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.183 06:07:35 -- pm/common@21 -- $ date +%s 00:03:45.183 06:07:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.183 06:07:35 -- pm/common@21 -- $ date +%s 00:03:45.183 06:07:35 -- pm/common@25 -- $ sleep 1 00:03:45.183 06:07:35 -- pm/common@21 -- $ date +%s 00:03:45.183 06:07:35 -- pm/common@21 -- $ date +%s 00:03:45.184 06:07:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733634455 00:03:45.184 06:07:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733634455 00:03:45.184 06:07:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733634455 00:03:45.184 06:07:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733634455 00:03:45.184 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733634455_collect-vmstat.pm.log 00:03:45.184 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733634455_collect-cpu-load.pm.log 00:03:45.184 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733634455_collect-cpu-temp.pm.log 00:03:45.184 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733634455_collect-bmc-pm.bmc.pm.log 00:03:46.120 06:07:36 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:46.120 06:07:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:46.120 06:07:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:46.120 06:07:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:46.120 06:07:36 -- spdk/autobuild.sh@16 -- $ date -u 00:03:46.120 Sun Dec 8 05:07:36 AM UTC 2024 00:03:46.120 06:07:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:46.120 v25.01-pre-313-gc0f3f2d18 00:03:46.120 06:07:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:46.120 06:07:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:46.120 06:07:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:46.120 06:07:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:46.120 06:07:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:46.120 06:07:36 -- common/autotest_common.sh@10 -- $ set +x 00:03:46.378 
************************************ 00:03:46.378 START TEST ubsan 00:03:46.378 ************************************ 00:03:46.378 06:07:36 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:46.378 using ubsan 00:03:46.378 00:03:46.378 real 0m0.000s 00:03:46.378 user 0m0.000s 00:03:46.378 sys 0m0.000s 00:03:46.378 06:07:36 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:46.378 06:07:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:46.378 ************************************ 00:03:46.378 END TEST ubsan 00:03:46.378 ************************************ 00:03:46.378 06:07:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:46.378 06:07:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:46.378 06:07:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:46.378 06:07:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:46.378 06:07:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:46.378 06:07:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:46.378 06:07:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:46.378 06:07:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:46.379 06:07:36 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:03:46.379 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:46.379 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:46.637 Using 'verbs' RDMA provider 00:03:57.226 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:07.211 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:07.211 Creating mk/config.mk...done. 00:04:07.211 Creating mk/cc.flags.mk...done. 00:04:07.211 Type 'make' to build. 00:04:07.211 06:07:57 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:04:07.211 06:07:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:07.211 06:07:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:07.211 06:07:57 -- common/autotest_common.sh@10 -- $ set +x 00:04:07.211 ************************************ 00:04:07.211 START TEST make 00:04:07.211 ************************************ 00:04:07.211 06:07:57 make -- common/autotest_common.sh@1129 -- $ make -j48 00:04:07.470 make[1]: Nothing to be done for 'all'. 
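The SPDK build kicked off here was configured a few lines above with ./configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared, followed by run_test make make -j48. A minimal sketch for reproducing this step outside Jenkins, assuming a local SPDK checkout with submodules initialized and fio sources under /usr/src/fio (both assumptions; the flag set and job count are copied from the log):

    # from the root of an SPDK checkout (run `git submodule update --init` first)
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j48   # -j48 matches the CI host; size this to the local core count
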
00:04:09.405 The Meson build system 00:04:09.405 Version: 1.5.0 00:04:09.405 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:09.405 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:09.405 Build type: native build 00:04:09.405 Project name: libvfio-user 00:04:09.405 Project version: 0.0.1 00:04:09.405 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:09.405 C linker for the host machine: cc ld.bfd 2.40-14 00:04:09.405 Host machine cpu family: x86_64 00:04:09.405 Host machine cpu: x86_64 00:04:09.405 Run-time dependency threads found: YES 00:04:09.405 Library dl found: YES 00:04:09.405 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:09.405 Run-time dependency json-c found: YES 0.17 00:04:09.405 Run-time dependency cmocka found: YES 1.1.7 00:04:09.405 Program pytest-3 found: NO 00:04:09.405 Program flake8 found: NO 00:04:09.405 Program misspell-fixer found: NO 00:04:09.405 Program restructuredtext-lint found: NO 00:04:09.405 Program valgrind found: YES (/usr/bin/valgrind) 00:04:09.405 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:09.405 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:09.405 Compiler for C supports arguments -Wwrite-strings: YES 00:04:09.405 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:09.405 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:09.405 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:09.405 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:09.405 Build targets in project: 8 00:04:09.405 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:09.405 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:09.405 00:04:09.405 libvfio-user 0.0.1 00:04:09.405 00:04:09.405 User defined options 00:04:09.405 buildtype : debug 00:04:09.405 default_library: shared 00:04:09.405 libdir : /usr/local/lib 00:04:09.405 00:04:09.405 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:10.351 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:10.351 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:10.351 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:10.351 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:10.351 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:10.351 [5/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:10.351 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:10.351 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:10.351 [8/37] Compiling C object samples/null.p/null.c.o 00:04:10.351 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:10.351 [10/37] Compiling C object samples/server.p/server.c.o 00:04:10.351 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:10.351 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:10.351 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:10.351 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:10.351 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:10.351 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:10.351 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:10.351 [18/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:10.351 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:10.613 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:10.613 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:10.613 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:10.613 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:10.613 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:10.613 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:10.613 [26/37] Compiling C object samples/client.p/client.c.o 00:04:10.613 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:10.613 [28/37] Linking target samples/client 00:04:10.613 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:04:10.613 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:10.875 [31/37] Linking target test/unit_tests 00:04:10.875 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:10.875 [33/37] Linking target samples/gpio-pci-idio-16 00:04:10.875 [34/37] Linking target samples/null 00:04:10.875 [35/37] Linking target samples/lspci 00:04:10.875 [36/37] Linking target samples/shadow_ioeventfd_server 00:04:10.875 [37/37] Linking target samples/server 00:04:10.875 INFO: autodetecting backend as ninja 00:04:10.875 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:04:11.138 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:11.710 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:11.710 ninja: no work to do. 00:04:16.974 The Meson build system 00:04:16.974 Version: 1.5.0 00:04:16.974 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:04:16.974 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:04:16.974 Build type: native build 00:04:16.974 Program cat found: YES (/usr/bin/cat) 00:04:16.974 Project name: DPDK 00:04:16.974 Project version: 24.03.0 00:04:16.974 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:16.974 C linker for the host machine: cc ld.bfd 2.40-14 00:04:16.974 Host machine cpu family: x86_64 00:04:16.974 Host machine cpu: x86_64 00:04:16.974 Message: ## Building in Developer Mode ## 00:04:16.974 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:16.974 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:04:16.974 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:16.974 Program python3 found: YES (/usr/bin/python3) 00:04:16.974 Program cat found: YES (/usr/bin/cat) 00:04:16.974 Compiler for C supports arguments -march=native: YES 00:04:16.974 Checking for size of "void *" : 8 00:04:16.974 Checking for size of "void *" : 8 (cached) 00:04:16.974 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:16.974 Library m found: YES 00:04:16.974 Library numa found: YES 00:04:16.974 Has header "numaif.h" : YES 00:04:16.974 Library fdt found: NO 00:04:16.974 Library execinfo found: NO 00:04:16.974 Has header "execinfo.h" : YES 00:04:16.974 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:16.974 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:16.974 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:16.974 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:16.974 Run-time dependency openssl found: YES 3.1.1 00:04:16.974 Run-time dependency libpcap found: YES 1.10.4 00:04:16.974 Has header "pcap.h" with dependency libpcap: YES 00:04:16.974 Compiler for C supports arguments -Wcast-qual: YES 00:04:16.974 Compiler for C supports arguments -Wdeprecated: YES 00:04:16.974 Compiler for C supports arguments -Wformat: YES 00:04:16.974 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:16.974 Compiler for C supports arguments -Wformat-security: NO 00:04:16.974 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:16.974 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:16.974 Compiler for C supports arguments -Wnested-externs: YES 00:04:16.974 Compiler for C supports arguments -Wold-style-definition: YES 00:04:16.974 Compiler for C supports arguments -Wpointer-arith: YES 00:04:16.974 Compiler for C supports arguments -Wsign-compare: YES 00:04:16.974 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:16.974 Compiler for C supports arguments -Wundef: YES 00:04:16.974 Compiler for C supports arguments -Wwrite-strings: YES 00:04:16.974 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:16.974 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:04:16.974 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:16.974 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:16.974 Program objdump found: YES (/usr/bin/objdump) 00:04:16.974 Compiler for C supports arguments -mavx512f: YES 00:04:16.974 Checking if "AVX512 checking" compiles: YES 00:04:16.974 Fetching value of define "__SSE4_2__" : 1 00:04:16.974 Fetching value of define "__AES__" : 1 00:04:16.974 Fetching value of define "__AVX__" : 1 00:04:16.974 Fetching value of define "__AVX2__" : (undefined) 00:04:16.974 Fetching value of define "__AVX512BW__" : (undefined) 00:04:16.974 Fetching value of define "__AVX512CD__" : (undefined) 00:04:16.974 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:16.974 Fetching value of define "__AVX512F__" : (undefined) 00:04:16.974 Fetching value of define "__AVX512VL__" : (undefined) 00:04:16.974 Fetching value of define "__PCLMUL__" : 1 00:04:16.974 Fetching value of define "__RDRND__" : 1 00:04:16.974 Fetching value of define "__RDSEED__" : (undefined) 00:04:16.974 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:16.974 Fetching value of define "__znver1__" : (undefined) 00:04:16.974 Fetching value of define "__znver2__" : (undefined) 00:04:16.974 Fetching value of define "__znver3__" : (undefined) 00:04:16.974 Fetching value of define "__znver4__" : (undefined) 00:04:16.974 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:16.974 Message: lib/log: Defining dependency "log" 00:04:16.974 Message: lib/kvargs: Defining dependency "kvargs" 00:04:16.974 Message: lib/telemetry: Defining dependency "telemetry" 00:04:16.974 Checking for function "getentropy" : NO 00:04:16.974 Message: lib/eal: Defining dependency "eal" 00:04:16.974 Message: lib/ring: Defining dependency "ring" 00:04:16.974 Message: lib/rcu: Defining dependency "rcu" 00:04:16.974 Message: lib/mempool: Defining dependency "mempool" 00:04:16.974 Message: lib/mbuf: Defining dependency "mbuf" 00:04:16.974 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:16.974 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:16.974 Compiler for C supports arguments -mpclmul: YES 00:04:16.974 Compiler for C supports arguments -maes: YES 00:04:16.974 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:16.974 Compiler for C supports arguments -mavx512bw: YES 00:04:16.974 Compiler for C supports arguments -mavx512dq: YES 00:04:16.974 Compiler for C supports arguments -mavx512vl: YES 00:04:16.974 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:16.974 Compiler for C supports arguments -mavx2: YES 00:04:16.974 Compiler for C supports arguments -mavx: YES 00:04:16.974 Message: lib/net: Defining dependency "net" 00:04:16.974 Message: lib/meter: Defining dependency "meter" 00:04:16.974 Message: lib/ethdev: Defining dependency "ethdev" 00:04:16.974 Message: lib/pci: Defining dependency "pci" 00:04:16.975 Message: lib/cmdline: Defining dependency "cmdline" 00:04:16.975 Message: lib/hash: Defining dependency "hash" 00:04:16.975 Message: lib/timer: Defining dependency "timer" 00:04:16.975 Message: lib/compressdev: Defining dependency "compressdev" 00:04:16.975 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:16.975 Message: lib/dmadev: Defining dependency "dmadev" 00:04:16.975 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:16.975 Message: lib/power: Defining dependency "power" 00:04:16.975 Message: lib/reorder: Defining dependency 
"reorder" 00:04:16.975 Message: lib/security: Defining dependency "security" 00:04:16.975 Has header "linux/userfaultfd.h" : YES 00:04:16.975 Has header "linux/vduse.h" : YES 00:04:16.975 Message: lib/vhost: Defining dependency "vhost" 00:04:16.975 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:16.975 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:16.975 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:16.975 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:16.975 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:16.975 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:16.975 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:16.975 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:16.975 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:16.975 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:16.975 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:16.975 Configuring doxy-api-html.conf using configuration 00:04:16.975 Configuring doxy-api-man.conf using configuration 00:04:16.975 Program mandb found: YES (/usr/bin/mandb) 00:04:16.975 Program sphinx-build found: NO 00:04:16.975 Configuring rte_build_config.h using configuration 00:04:16.975 Message: 00:04:16.975 ================= 00:04:16.975 Applications Enabled 00:04:16.975 ================= 00:04:16.975 00:04:16.975 apps: 00:04:16.975 00:04:16.975 00:04:16.975 Message: 00:04:16.975 ================= 00:04:16.975 Libraries Enabled 00:04:16.975 ================= 00:04:16.975 00:04:16.975 libs: 00:04:16.975 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:16.975 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:16.975 cryptodev, dmadev, power, reorder, security, vhost, 00:04:16.975 00:04:16.975 Message: 00:04:16.975 =============== 00:04:16.975 Drivers Enabled 00:04:16.975 =============== 00:04:16.975 00:04:16.975 common: 00:04:16.975 00:04:16.975 bus: 00:04:16.975 pci, vdev, 00:04:16.975 mempool: 00:04:16.975 ring, 00:04:16.975 dma: 00:04:16.975 00:04:16.975 net: 00:04:16.975 00:04:16.975 crypto: 00:04:16.975 00:04:16.975 compress: 00:04:16.975 00:04:16.975 vdpa: 00:04:16.975 00:04:16.975 00:04:16.975 Message: 00:04:16.975 ================= 00:04:16.975 Content Skipped 00:04:16.975 ================= 00:04:16.975 00:04:16.975 apps: 00:04:16.975 dumpcap: explicitly disabled via build config 00:04:16.975 graph: explicitly disabled via build config 00:04:16.975 pdump: explicitly disabled via build config 00:04:16.975 proc-info: explicitly disabled via build config 00:04:16.975 test-acl: explicitly disabled via build config 00:04:16.975 test-bbdev: explicitly disabled via build config 00:04:16.975 test-cmdline: explicitly disabled via build config 00:04:16.975 test-compress-perf: explicitly disabled via build config 00:04:16.975 test-crypto-perf: explicitly disabled via build config 00:04:16.975 test-dma-perf: explicitly disabled via build config 00:04:16.975 test-eventdev: explicitly disabled via build config 00:04:16.975 test-fib: explicitly disabled via build config 00:04:16.975 test-flow-perf: explicitly disabled via build config 00:04:16.975 test-gpudev: explicitly disabled via build config 00:04:16.975 test-mldev: explicitly disabled via build config 00:04:16.975 test-pipeline: explicitly disabled via build config 00:04:16.975 test-pmd: explicitly 
disabled via build config 00:04:16.975 test-regex: explicitly disabled via build config 00:04:16.975 test-sad: explicitly disabled via build config 00:04:16.975 test-security-perf: explicitly disabled via build config 00:04:16.975 00:04:16.975 libs: 00:04:16.975 argparse: explicitly disabled via build config 00:04:16.975 metrics: explicitly disabled via build config 00:04:16.975 acl: explicitly disabled via build config 00:04:16.975 bbdev: explicitly disabled via build config 00:04:16.975 bitratestats: explicitly disabled via build config 00:04:16.975 bpf: explicitly disabled via build config 00:04:16.975 cfgfile: explicitly disabled via build config 00:04:16.975 distributor: explicitly disabled via build config 00:04:16.975 efd: explicitly disabled via build config 00:04:16.975 eventdev: explicitly disabled via build config 00:04:16.975 dispatcher: explicitly disabled via build config 00:04:16.975 gpudev: explicitly disabled via build config 00:04:16.975 gro: explicitly disabled via build config 00:04:16.975 gso: explicitly disabled via build config 00:04:16.975 ip_frag: explicitly disabled via build config 00:04:16.975 jobstats: explicitly disabled via build config 00:04:16.975 latencystats: explicitly disabled via build config 00:04:16.975 lpm: explicitly disabled via build config 00:04:16.975 member: explicitly disabled via build config 00:04:16.975 pcapng: explicitly disabled via build config 00:04:16.975 rawdev: explicitly disabled via build config 00:04:16.975 regexdev: explicitly disabled via build config 00:04:16.975 mldev: explicitly disabled via build config 00:04:16.975 rib: explicitly disabled via build config 00:04:16.975 sched: explicitly disabled via build config 00:04:16.975 stack: explicitly disabled via build config 00:04:16.975 ipsec: explicitly disabled via build config 00:04:16.975 pdcp: explicitly disabled via build config 00:04:16.975 fib: explicitly disabled via build config 00:04:16.975 port: explicitly disabled via build config 00:04:16.975 pdump: explicitly disabled via build config 00:04:16.975 table: explicitly disabled via build config 00:04:16.975 pipeline: explicitly disabled via build config 00:04:16.975 graph: explicitly disabled via build config 00:04:16.975 node: explicitly disabled via build config 00:04:16.975 00:04:16.975 drivers: 00:04:16.975 common/cpt: not in enabled drivers build config 00:04:16.975 common/dpaax: not in enabled drivers build config 00:04:16.975 common/iavf: not in enabled drivers build config 00:04:16.975 common/idpf: not in enabled drivers build config 00:04:16.975 common/ionic: not in enabled drivers build config 00:04:16.975 common/mvep: not in enabled drivers build config 00:04:16.975 common/octeontx: not in enabled drivers build config 00:04:16.975 bus/auxiliary: not in enabled drivers build config 00:04:16.975 bus/cdx: not in enabled drivers build config 00:04:16.975 bus/dpaa: not in enabled drivers build config 00:04:16.975 bus/fslmc: not in enabled drivers build config 00:04:16.975 bus/ifpga: not in enabled drivers build config 00:04:16.975 bus/platform: not in enabled drivers build config 00:04:16.975 bus/uacce: not in enabled drivers build config 00:04:16.975 bus/vmbus: not in enabled drivers build config 00:04:16.975 common/cnxk: not in enabled drivers build config 00:04:16.975 common/mlx5: not in enabled drivers build config 00:04:16.975 common/nfp: not in enabled drivers build config 00:04:16.975 common/nitrox: not in enabled drivers build config 00:04:16.975 common/qat: not in enabled drivers build config 
00:04:16.975 common/sfc_efx: not in enabled drivers build config 00:04:16.975 mempool/bucket: not in enabled drivers build config 00:04:16.975 mempool/cnxk: not in enabled drivers build config 00:04:16.975 mempool/dpaa: not in enabled drivers build config 00:04:16.975 mempool/dpaa2: not in enabled drivers build config 00:04:16.975 mempool/octeontx: not in enabled drivers build config 00:04:16.975 mempool/stack: not in enabled drivers build config 00:04:16.975 dma/cnxk: not in enabled drivers build config 00:04:16.975 dma/dpaa: not in enabled drivers build config 00:04:16.975 dma/dpaa2: not in enabled drivers build config 00:04:16.975 dma/hisilicon: not in enabled drivers build config 00:04:16.975 dma/idxd: not in enabled drivers build config 00:04:16.975 dma/ioat: not in enabled drivers build config 00:04:16.975 dma/skeleton: not in enabled drivers build config 00:04:16.975 net/af_packet: not in enabled drivers build config 00:04:16.975 net/af_xdp: not in enabled drivers build config 00:04:16.975 net/ark: not in enabled drivers build config 00:04:16.975 net/atlantic: not in enabled drivers build config 00:04:16.975 net/avp: not in enabled drivers build config 00:04:16.975 net/axgbe: not in enabled drivers build config 00:04:16.975 net/bnx2x: not in enabled drivers build config 00:04:16.975 net/bnxt: not in enabled drivers build config 00:04:16.975 net/bonding: not in enabled drivers build config 00:04:16.975 net/cnxk: not in enabled drivers build config 00:04:16.975 net/cpfl: not in enabled drivers build config 00:04:16.975 net/cxgbe: not in enabled drivers build config 00:04:16.975 net/dpaa: not in enabled drivers build config 00:04:16.975 net/dpaa2: not in enabled drivers build config 00:04:16.975 net/e1000: not in enabled drivers build config 00:04:16.975 net/ena: not in enabled drivers build config 00:04:16.975 net/enetc: not in enabled drivers build config 00:04:16.975 net/enetfec: not in enabled drivers build config 00:04:16.975 net/enic: not in enabled drivers build config 00:04:16.975 net/failsafe: not in enabled drivers build config 00:04:16.975 net/fm10k: not in enabled drivers build config 00:04:16.975 net/gve: not in enabled drivers build config 00:04:16.975 net/hinic: not in enabled drivers build config 00:04:16.975 net/hns3: not in enabled drivers build config 00:04:16.975 net/i40e: not in enabled drivers build config 00:04:16.975 net/iavf: not in enabled drivers build config 00:04:16.975 net/ice: not in enabled drivers build config 00:04:16.975 net/idpf: not in enabled drivers build config 00:04:16.975 net/igc: not in enabled drivers build config 00:04:16.975 net/ionic: not in enabled drivers build config 00:04:16.975 net/ipn3ke: not in enabled drivers build config 00:04:16.975 net/ixgbe: not in enabled drivers build config 00:04:16.976 net/mana: not in enabled drivers build config 00:04:16.976 net/memif: not in enabled drivers build config 00:04:16.976 net/mlx4: not in enabled drivers build config 00:04:16.976 net/mlx5: not in enabled drivers build config 00:04:16.976 net/mvneta: not in enabled drivers build config 00:04:16.976 net/mvpp2: not in enabled drivers build config 00:04:16.976 net/netvsc: not in enabled drivers build config 00:04:16.976 net/nfb: not in enabled drivers build config 00:04:16.976 net/nfp: not in enabled drivers build config 00:04:16.976 net/ngbe: not in enabled drivers build config 00:04:16.976 net/null: not in enabled drivers build config 00:04:16.976 net/octeontx: not in enabled drivers build config 00:04:16.976 net/octeon_ep: not in enabled 
drivers build config 00:04:16.976 net/pcap: not in enabled drivers build config 00:04:16.976 net/pfe: not in enabled drivers build config 00:04:16.976 net/qede: not in enabled drivers build config 00:04:16.976 net/ring: not in enabled drivers build config 00:04:16.976 net/sfc: not in enabled drivers build config 00:04:16.976 net/softnic: not in enabled drivers build config 00:04:16.976 net/tap: not in enabled drivers build config 00:04:16.976 net/thunderx: not in enabled drivers build config 00:04:16.976 net/txgbe: not in enabled drivers build config 00:04:16.976 net/vdev_netvsc: not in enabled drivers build config 00:04:16.976 net/vhost: not in enabled drivers build config 00:04:16.976 net/virtio: not in enabled drivers build config 00:04:16.976 net/vmxnet3: not in enabled drivers build config 00:04:16.976 raw/*: missing internal dependency, "rawdev" 00:04:16.976 crypto/armv8: not in enabled drivers build config 00:04:16.976 crypto/bcmfs: not in enabled drivers build config 00:04:16.976 crypto/caam_jr: not in enabled drivers build config 00:04:16.976 crypto/ccp: not in enabled drivers build config 00:04:16.976 crypto/cnxk: not in enabled drivers build config 00:04:16.976 crypto/dpaa_sec: not in enabled drivers build config 00:04:16.976 crypto/dpaa2_sec: not in enabled drivers build config 00:04:16.976 crypto/ipsec_mb: not in enabled drivers build config 00:04:16.976 crypto/mlx5: not in enabled drivers build config 00:04:16.976 crypto/mvsam: not in enabled drivers build config 00:04:16.976 crypto/nitrox: not in enabled drivers build config 00:04:16.976 crypto/null: not in enabled drivers build config 00:04:16.976 crypto/octeontx: not in enabled drivers build config 00:04:16.976 crypto/openssl: not in enabled drivers build config 00:04:16.976 crypto/scheduler: not in enabled drivers build config 00:04:16.976 crypto/uadk: not in enabled drivers build config 00:04:16.976 crypto/virtio: not in enabled drivers build config 00:04:16.976 compress/isal: not in enabled drivers build config 00:04:16.976 compress/mlx5: not in enabled drivers build config 00:04:16.976 compress/nitrox: not in enabled drivers build config 00:04:16.976 compress/octeontx: not in enabled drivers build config 00:04:16.976 compress/zlib: not in enabled drivers build config 00:04:16.976 regex/*: missing internal dependency, "regexdev" 00:04:16.976 ml/*: missing internal dependency, "mldev" 00:04:16.976 vdpa/ifc: not in enabled drivers build config 00:04:16.976 vdpa/mlx5: not in enabled drivers build config 00:04:16.976 vdpa/nfp: not in enabled drivers build config 00:04:16.976 vdpa/sfc: not in enabled drivers build config 00:04:16.976 event/*: missing internal dependency, "eventdev" 00:04:16.976 baseband/*: missing internal dependency, "bbdev" 00:04:16.976 gpu/*: missing internal dependency, "gpudev" 00:04:16.976 00:04:16.976 00:04:16.976 Build targets in project: 85 00:04:16.976 00:04:16.976 DPDK 24.03.0 00:04:16.976 00:04:16.976 User defined options 00:04:16.976 buildtype : debug 00:04:16.976 default_library : shared 00:04:16.976 libdir : lib 00:04:16.976 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:16.976 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:16.976 c_link_args : 00:04:16.976 cpu_instruction_set: native 00:04:16.976 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:04:16.976 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:04:16.976 enable_docs : false 00:04:16.976 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:16.976 enable_kmods : false 00:04:16.976 max_lcores : 128 00:04:16.976 tests : false 00:04:16.976 00:04:16.976 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:17.549 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:17.549 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:17.549 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:17.549 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:17.549 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:17.549 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:17.549 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:17.549 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:17.549 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:17.549 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:17.549 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:17.549 [11/268] Linking static target lib/librte_kvargs.a 00:04:17.549 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:17.549 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:17.549 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:17.549 [15/268] Linking static target lib/librte_log.a 00:04:17.549 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:18.119 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.380 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:18.380 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:18.380 [20/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:18.380 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:18.380 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:18.380 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:18.380 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:18.380 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:18.380 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:18.380 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:18.380 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:18.380 [29/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:18.380 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:18.380 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:18.380 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:18.380 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:18.380 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:18.380 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:18.380 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:18.380 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:18.380 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:18.380 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:18.380 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:18.380 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:18.380 [42/268] Linking static target lib/librte_telemetry.a 00:04:18.380 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:18.380 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:18.380 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:18.380 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:18.380 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:18.380 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:18.380 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:18.380 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:18.380 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:18.380 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:18.380 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:18.380 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:18.380 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:18.640 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:18.640 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:18.640 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:18.640 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:18.640 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:18.640 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:18.640 [62/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.902 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:18.902 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:18.902 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:18.902 [66/268] Linking target lib/librte_log.so.24.1 00:04:18.902 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:18.902 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:18.902 [69/268] Compiling C object 
lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:18.902 [70/268] Linking static target lib/librte_pci.a 00:04:19.165 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:19.165 [72/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:19.165 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:19.165 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:19.165 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:19.165 [76/268] Linking target lib/librte_kvargs.so.24.1 00:04:19.165 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:19.166 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:19.166 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:19.166 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:19.166 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:19.166 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:19.166 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:19.456 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:19.456 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:19.456 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:19.456 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:19.456 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:19.456 [89/268] Linking static target lib/librte_ring.a 00:04:19.456 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:19.456 [91/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.456 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:19.456 [93/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:19.456 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:19.456 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:19.456 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:19.456 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:19.456 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:19.456 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:19.456 [100/268] Linking target lib/librte_telemetry.so.24.1 00:04:19.456 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:19.456 [102/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:19.456 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:19.456 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:19.456 [105/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:19.456 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:19.456 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:19.456 [108/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.456 [109/268] Linking static target lib/librte_meter.a 00:04:19.456 [110/268] Compiling C object 
lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:19.456 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:19.719 [112/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:19.719 [113/268] Linking static target lib/librte_rcu.a 00:04:19.719 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:19.719 [115/268] Linking static target lib/librte_eal.a 00:04:19.719 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:19.719 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:19.719 [118/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:19.719 [119/268] Linking static target lib/librte_mempool.a 00:04:19.719 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:19.719 [121/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:19.719 [122/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:19.719 [123/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:19.719 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:19.719 [125/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:19.719 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:19.719 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:19.984 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:19.984 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:19.984 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:19.984 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:19.984 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:19.984 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:19.984 [134/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.984 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:19.984 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:19.984 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:19.984 [138/268] Linking static target lib/librte_net.a 00:04:20.245 [139/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.245 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:20.245 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:20.245 [142/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.245 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:20.245 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:20.245 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:20.245 [146/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:20.245 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:20.513 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:20.513 [149/268] Linking static target lib/librte_cmdline.a 00:04:20.513 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:20.513 
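(The meson summary at the head of this DPDK build — disable_libs, enable_drivers, enable_kmods=false, max_lcores=128, tests=false — is the configuration the ninja run here is compiling. Below is a minimal sketch of an equivalent manual configure-and-build; SPDK's own build scripts drive the real invocation, and the long list values are abridged from the summary above rather than this job's complete set, so treat the exact command as illustrative only.)

# Sketch only: the -D option names are real DPDK meson options taken from
# the configuration summary above; the comma lists are abridged examples.
meson setup dpdk/build-tmp dpdk \
  -Denable_docs=false -Denable_kmods=false -Dtests=false \
  -Dmax_lcores=128 \
  -Ddisable_libs=bbdev,gpudev,mldev,pipeline \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi
ninja -C dpdk/build-tmp -j 48   # same build dir and -j 48 the log reports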
[151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:20.513 [152/268] Linking static target lib/librte_timer.a 00:04:20.513 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:20.513 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:20.513 [155/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:20.513 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:20.513 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:20.513 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.513 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:20.771 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:20.771 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:20.771 [162/268] Linking static target lib/librte_dmadev.a 00:04:20.771 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:20.771 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:20.771 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:20.771 [166/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:20.771 [167/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.771 [168/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:20.771 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:20.771 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:20.771 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:20.771 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.771 [173/268] Linking static target lib/librte_power.a 00:04:21.030 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:21.030 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:21.030 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:21.030 [177/268] Linking static target lib/librte_hash.a 00:04:21.030 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:21.030 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:21.030 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:21.030 [181/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:21.030 [182/268] Linking static target lib/librte_compressdev.a 00:04:21.030 [183/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:21.030 [184/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:21.030 [185/268] Linking static target lib/librte_mbuf.a 00:04:21.030 [186/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:21.030 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:21.030 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:21.288 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:21.288 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:21.288 [191/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:21.288 [192/268] Linking static target lib/librte_reorder.a 00:04:21.288 [193/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:21.288 [194/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.288 [195/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.288 [196/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:21.288 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:21.288 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:21.288 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:21.288 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:21.288 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:21.288 [202/268] Linking static target drivers/librte_bus_vdev.a 00:04:21.288 [203/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:21.545 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:21.545 [205/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:21.545 [206/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.545 [207/268] Linking static target lib/librte_security.a 00:04:21.545 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:21.545 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:21.545 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:21.545 [211/268] Linking static target drivers/librte_bus_pci.a 00:04:21.545 [212/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.545 [213/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.545 [214/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:21.545 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.545 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:21.545 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:21.545 [218/268] Linking static target drivers/librte_mempool_ring.a 00:04:21.545 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.545 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.803 [221/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:21.803 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:21.803 [223/268] Linking static target lib/librte_cryptodev.a 00:04:21.803 [224/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.803 [225/268] Linking static target lib/librte_ethdev.a 00:04:22.060 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.995 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.367 [228/268] Compiling C 
object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:25.740 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.740 [230/268] Linking target lib/librte_eal.so.24.1 00:04:26.025 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.025 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:26.025 [233/268] Linking target lib/librte_timer.so.24.1 00:04:26.025 [234/268] Linking target lib/librte_meter.so.24.1 00:04:26.025 [235/268] Linking target lib/librte_ring.so.24.1 00:04:26.025 [236/268] Linking target lib/librte_pci.so.24.1 00:04:26.025 [237/268] Linking target lib/librte_dmadev.so.24.1 00:04:26.025 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:26.025 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:26.025 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:26.283 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:26.283 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:26.283 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:26.283 [244/268] Linking target lib/librte_rcu.so.24.1 00:04:26.283 [245/268] Linking target lib/librte_mempool.so.24.1 00:04:26.283 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:26.283 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:26.283 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:26.283 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:26.283 [250/268] Linking target lib/librte_mbuf.so.24.1 00:04:26.541 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:26.541 [252/268] Linking target lib/librte_compressdev.so.24.1 00:04:26.541 [253/268] Linking target lib/librte_reorder.so.24.1 00:04:26.541 [254/268] Linking target lib/librte_net.so.24.1 00:04:26.541 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:04:26.541 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:26.541 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:26.798 [258/268] Linking target lib/librte_security.so.24.1 00:04:26.798 [259/268] Linking target lib/librte_cmdline.so.24.1 00:04:26.798 [260/268] Linking target lib/librte_hash.so.24.1 00:04:26.798 [261/268] Linking target lib/librte_ethdev.so.24.1 00:04:26.798 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:26.798 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:26.798 [264/268] Linking target lib/librte_power.so.24.1 00:04:30.076 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:30.076 [266/268] Linking static target lib/librte_vhost.a 00:04:31.008 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.265 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:31.265 INFO: autodetecting backend as ninja 00:04:31.265 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:04:53.202 CC lib/ut_mock/mock.o 00:04:53.202 CC lib/log/log.o 00:04:53.202 CC 
lib/ut/ut.o 00:04:53.202 CC lib/log/log_flags.o 00:04:53.202 CC lib/log/log_deprecated.o 00:04:53.202 LIB libspdk_ut.a 00:04:53.202 LIB libspdk_ut_mock.a 00:04:53.202 LIB libspdk_log.a 00:04:53.202 SO libspdk_ut.so.2.0 00:04:53.202 SO libspdk_ut_mock.so.6.0 00:04:53.202 SO libspdk_log.so.7.1 00:04:53.202 SYMLINK libspdk_ut.so 00:04:53.202 SYMLINK libspdk_ut_mock.so 00:04:53.202 SYMLINK libspdk_log.so 00:04:53.202 CC lib/dma/dma.o 00:04:53.202 CXX lib/trace_parser/trace.o 00:04:53.202 CC lib/util/base64.o 00:04:53.202 CC lib/ioat/ioat.o 00:04:53.202 CC lib/util/bit_array.o 00:04:53.202 CC lib/util/cpuset.o 00:04:53.202 CC lib/util/crc16.o 00:04:53.202 CC lib/util/crc32.o 00:04:53.202 CC lib/util/crc32c.o 00:04:53.202 CC lib/util/crc32_ieee.o 00:04:53.202 CC lib/util/crc64.o 00:04:53.202 CC lib/util/dif.o 00:04:53.202 CC lib/util/fd.o 00:04:53.202 CC lib/util/fd_group.o 00:04:53.202 CC lib/util/file.o 00:04:53.202 CC lib/util/hexlify.o 00:04:53.202 CC lib/util/iov.o 00:04:53.202 CC lib/util/math.o 00:04:53.202 CC lib/util/net.o 00:04:53.202 CC lib/util/pipe.o 00:04:53.202 CC lib/util/strerror_tls.o 00:04:53.202 CC lib/util/string.o 00:04:53.202 CC lib/util/uuid.o 00:04:53.202 CC lib/util/xor.o 00:04:53.202 CC lib/util/zipf.o 00:04:53.202 CC lib/util/md5.o 00:04:53.202 CC lib/vfio_user/host/vfio_user_pci.o 00:04:53.202 CC lib/vfio_user/host/vfio_user.o 00:04:53.202 LIB libspdk_dma.a 00:04:53.202 SO libspdk_dma.so.5.0 00:04:53.202 LIB libspdk_ioat.a 00:04:53.202 SO libspdk_ioat.so.7.0 00:04:53.202 SYMLINK libspdk_dma.so 00:04:53.202 SYMLINK libspdk_ioat.so 00:04:53.202 LIB libspdk_vfio_user.a 00:04:53.202 SO libspdk_vfio_user.so.5.0 00:04:53.202 SYMLINK libspdk_vfio_user.so 00:04:53.202 LIB libspdk_util.a 00:04:53.202 SO libspdk_util.so.10.1 00:04:53.202 SYMLINK libspdk_util.so 00:04:53.202 CC lib/conf/conf.o 00:04:53.202 CC lib/idxd/idxd.o 00:04:53.202 CC lib/env_dpdk/env.o 00:04:53.202 CC lib/vmd/vmd.o 00:04:53.202 CC lib/vmd/led.o 00:04:53.202 CC lib/env_dpdk/memory.o 00:04:53.203 CC lib/idxd/idxd_user.o 00:04:53.203 CC lib/rdma_utils/rdma_utils.o 00:04:53.203 CC lib/json/json_parse.o 00:04:53.203 CC lib/env_dpdk/pci.o 00:04:53.203 CC lib/idxd/idxd_kernel.o 00:04:53.203 CC lib/env_dpdk/init.o 00:04:53.203 CC lib/json/json_util.o 00:04:53.203 CC lib/env_dpdk/threads.o 00:04:53.203 CC lib/json/json_write.o 00:04:53.203 CC lib/env_dpdk/pci_ioat.o 00:04:53.203 CC lib/env_dpdk/pci_virtio.o 00:04:53.203 CC lib/env_dpdk/pci_vmd.o 00:04:53.203 CC lib/env_dpdk/pci_idxd.o 00:04:53.203 CC lib/env_dpdk/pci_event.o 00:04:53.203 CC lib/env_dpdk/sigbus_handler.o 00:04:53.203 CC lib/env_dpdk/pci_dpdk.o 00:04:53.203 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:53.203 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:53.203 LIB libspdk_trace_parser.a 00:04:53.203 SO libspdk_trace_parser.so.6.0 00:04:53.203 SYMLINK libspdk_trace_parser.so 00:04:53.203 LIB libspdk_conf.a 00:04:53.203 SO libspdk_conf.so.6.0 00:04:53.203 LIB libspdk_rdma_utils.a 00:04:53.203 LIB libspdk_json.a 00:04:53.203 SYMLINK libspdk_conf.so 00:04:53.203 SO libspdk_rdma_utils.so.1.0 00:04:53.203 SO libspdk_json.so.6.0 00:04:53.203 SYMLINK libspdk_rdma_utils.so 00:04:53.203 SYMLINK libspdk_json.so 00:04:53.203 CC lib/rdma_provider/common.o 00:04:53.203 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:53.203 CC lib/jsonrpc/jsonrpc_server.o 00:04:53.203 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:53.203 CC lib/jsonrpc/jsonrpc_client.o 00:04:53.203 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:53.203 LIB libspdk_idxd.a 00:04:53.203 SO libspdk_idxd.so.12.1 00:04:53.203 
LIB libspdk_vmd.a 00:04:53.203 SYMLINK libspdk_idxd.so 00:04:53.203 SO libspdk_vmd.so.6.0 00:04:53.203 SYMLINK libspdk_vmd.so 00:04:53.203 LIB libspdk_rdma_provider.a 00:04:53.203 SO libspdk_rdma_provider.so.7.0 00:04:53.203 LIB libspdk_jsonrpc.a 00:04:53.203 SYMLINK libspdk_rdma_provider.so 00:04:53.203 SO libspdk_jsonrpc.so.6.0 00:04:53.203 SYMLINK libspdk_jsonrpc.so 00:04:53.203 CC lib/rpc/rpc.o 00:04:53.203 LIB libspdk_rpc.a 00:04:53.203 SO libspdk_rpc.so.6.0 00:04:53.203 SYMLINK libspdk_rpc.so 00:04:53.461 CC lib/trace/trace.o 00:04:53.461 CC lib/trace/trace_flags.o 00:04:53.461 CC lib/keyring/keyring.o 00:04:53.461 CC lib/trace/trace_rpc.o 00:04:53.461 CC lib/keyring/keyring_rpc.o 00:04:53.461 CC lib/notify/notify.o 00:04:53.461 CC lib/notify/notify_rpc.o 00:04:53.461 LIB libspdk_notify.a 00:04:53.461 SO libspdk_notify.so.6.0 00:04:53.461 SYMLINK libspdk_notify.so 00:04:53.719 LIB libspdk_keyring.a 00:04:53.719 LIB libspdk_trace.a 00:04:53.719 SO libspdk_keyring.so.2.0 00:04:53.719 SO libspdk_trace.so.11.0 00:04:53.719 SYMLINK libspdk_keyring.so 00:04:53.719 SYMLINK libspdk_trace.so 00:04:53.978 CC lib/thread/thread.o 00:04:53.978 CC lib/thread/iobuf.o 00:04:53.978 CC lib/sock/sock.o 00:04:53.978 CC lib/sock/sock_rpc.o 00:04:53.978 LIB libspdk_env_dpdk.a 00:04:53.978 SO libspdk_env_dpdk.so.15.1 00:04:53.978 SYMLINK libspdk_env_dpdk.so 00:04:54.237 LIB libspdk_sock.a 00:04:54.237 SO libspdk_sock.so.10.0 00:04:54.237 SYMLINK libspdk_sock.so 00:04:54.496 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:54.496 CC lib/nvme/nvme_ctrlr.o 00:04:54.496 CC lib/nvme/nvme_fabric.o 00:04:54.496 CC lib/nvme/nvme_ns_cmd.o 00:04:54.496 CC lib/nvme/nvme_ns.o 00:04:54.496 CC lib/nvme/nvme_pcie_common.o 00:04:54.496 CC lib/nvme/nvme_pcie.o 00:04:54.496 CC lib/nvme/nvme_qpair.o 00:04:54.496 CC lib/nvme/nvme.o 00:04:54.496 CC lib/nvme/nvme_quirks.o 00:04:54.496 CC lib/nvme/nvme_transport.o 00:04:54.496 CC lib/nvme/nvme_discovery.o 00:04:54.496 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:54.496 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:54.496 CC lib/nvme/nvme_tcp.o 00:04:54.496 CC lib/nvme/nvme_opal.o 00:04:54.496 CC lib/nvme/nvme_io_msg.o 00:04:54.496 CC lib/nvme/nvme_poll_group.o 00:04:54.496 CC lib/nvme/nvme_zns.o 00:04:54.497 CC lib/nvme/nvme_stubs.o 00:04:54.497 CC lib/nvme/nvme_auth.o 00:04:54.497 CC lib/nvme/nvme_cuse.o 00:04:54.497 CC lib/nvme/nvme_vfio_user.o 00:04:54.497 CC lib/nvme/nvme_rdma.o 00:04:55.432 LIB libspdk_thread.a 00:04:55.432 SO libspdk_thread.so.11.0 00:04:55.689 SYMLINK libspdk_thread.so 00:04:55.689 CC lib/init/json_config.o 00:04:55.689 CC lib/fsdev/fsdev.o 00:04:55.689 CC lib/accel/accel.o 00:04:55.689 CC lib/blob/blobstore.o 00:04:55.689 CC lib/virtio/virtio.o 00:04:55.689 CC lib/vfu_tgt/tgt_endpoint.o 00:04:55.689 CC lib/fsdev/fsdev_io.o 00:04:55.689 CC lib/blob/request.o 00:04:55.689 CC lib/init/subsystem.o 00:04:55.689 CC lib/accel/accel_rpc.o 00:04:55.689 CC lib/virtio/virtio_vhost_user.o 00:04:55.689 CC lib/vfu_tgt/tgt_rpc.o 00:04:55.689 CC lib/fsdev/fsdev_rpc.o 00:04:55.689 CC lib/accel/accel_sw.o 00:04:55.689 CC lib/blob/zeroes.o 00:04:55.689 CC lib/init/subsystem_rpc.o 00:04:55.689 CC lib/virtio/virtio_vfio_user.o 00:04:55.689 CC lib/virtio/virtio_pci.o 00:04:55.689 CC lib/blob/blob_bs_dev.o 00:04:55.689 CC lib/init/rpc.o 00:04:55.946 LIB libspdk_init.a 00:04:55.946 SO libspdk_init.so.6.0 00:04:56.205 LIB libspdk_virtio.a 00:04:56.205 LIB libspdk_vfu_tgt.a 00:04:56.205 SYMLINK libspdk_init.so 00:04:56.205 SO libspdk_virtio.so.7.0 00:04:56.205 SO libspdk_vfu_tgt.so.3.0 00:04:56.205 
SYMLINK libspdk_vfu_tgt.so 00:04:56.205 SYMLINK libspdk_virtio.so 00:04:56.205 CC lib/event/app.o 00:04:56.205 CC lib/event/reactor.o 00:04:56.205 CC lib/event/log_rpc.o 00:04:56.205 CC lib/event/app_rpc.o 00:04:56.205 CC lib/event/scheduler_static.o 00:04:56.464 LIB libspdk_fsdev.a 00:04:56.464 SO libspdk_fsdev.so.2.0 00:04:56.464 SYMLINK libspdk_fsdev.so 00:04:56.722 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:56.722 LIB libspdk_event.a 00:04:56.722 SO libspdk_event.so.14.0 00:04:56.722 SYMLINK libspdk_event.so 00:04:56.980 LIB libspdk_accel.a 00:04:56.981 SO libspdk_accel.so.16.0 00:04:56.981 SYMLINK libspdk_accel.so 00:04:57.238 LIB libspdk_nvme.a 00:04:57.238 CC lib/bdev/bdev.o 00:04:57.238 CC lib/bdev/bdev_rpc.o 00:04:57.238 CC lib/bdev/bdev_zone.o 00:04:57.238 CC lib/bdev/part.o 00:04:57.238 CC lib/bdev/scsi_nvme.o 00:04:57.238 SO libspdk_nvme.so.15.0 00:04:57.238 LIB libspdk_fuse_dispatcher.a 00:04:57.496 SO libspdk_fuse_dispatcher.so.1.0 00:04:57.496 SYMLINK libspdk_fuse_dispatcher.so 00:04:57.496 SYMLINK libspdk_nvme.so 00:04:58.869 LIB libspdk_blob.a 00:04:58.869 SO libspdk_blob.so.12.0 00:04:58.869 SYMLINK libspdk_blob.so 00:04:59.126 CC lib/lvol/lvol.o 00:04:59.126 CC lib/blobfs/blobfs.o 00:04:59.126 CC lib/blobfs/tree.o 00:05:00.162 LIB libspdk_bdev.a 00:05:00.162 LIB libspdk_blobfs.a 00:05:00.162 SO libspdk_bdev.so.17.0 00:05:00.162 SO libspdk_blobfs.so.11.0 00:05:00.162 SYMLINK libspdk_blobfs.so 00:05:00.162 SYMLINK libspdk_bdev.so 00:05:00.162 LIB libspdk_lvol.a 00:05:00.162 SO libspdk_lvol.so.11.0 00:05:00.162 SYMLINK libspdk_lvol.so 00:05:00.162 CC lib/scsi/dev.o 00:05:00.162 CC lib/nbd/nbd.o 00:05:00.162 CC lib/scsi/lun.o 00:05:00.162 CC lib/nbd/nbd_rpc.o 00:05:00.162 CC lib/ublk/ublk.o 00:05:00.162 CC lib/nvmf/ctrlr.o 00:05:00.162 CC lib/scsi/port.o 00:05:00.162 CC lib/ublk/ublk_rpc.o 00:05:00.162 CC lib/ftl/ftl_core.o 00:05:00.162 CC lib/nvmf/ctrlr_discovery.o 00:05:00.162 CC lib/scsi/scsi.o 00:05:00.162 CC lib/ftl/ftl_init.o 00:05:00.162 CC lib/nvmf/ctrlr_bdev.o 00:05:00.162 CC lib/scsi/scsi_bdev.o 00:05:00.162 CC lib/ftl/ftl_layout.o 00:05:00.162 CC lib/nvmf/subsystem.o 00:05:00.162 CC lib/scsi/scsi_pr.o 00:05:00.162 CC lib/ftl/ftl_debug.o 00:05:00.162 CC lib/nvmf/nvmf.o 00:05:00.162 CC lib/scsi/scsi_rpc.o 00:05:00.162 CC lib/ftl/ftl_io.o 00:05:00.162 CC lib/nvmf/nvmf_rpc.o 00:05:00.162 CC lib/ftl/ftl_sb.o 00:05:00.162 CC lib/scsi/task.o 00:05:00.162 CC lib/ftl/ftl_l2p.o 00:05:00.162 CC lib/nvmf/transport.o 00:05:00.162 CC lib/ftl/ftl_l2p_flat.o 00:05:00.162 CC lib/ftl/ftl_nv_cache.o 00:05:00.162 CC lib/nvmf/tcp.o 00:05:00.162 CC lib/nvmf/stubs.o 00:05:00.162 CC lib/ftl/ftl_band.o 00:05:00.162 CC lib/nvmf/mdns_server.o 00:05:00.162 CC lib/ftl/ftl_band_ops.o 00:05:00.162 CC lib/nvmf/vfio_user.o 00:05:00.162 CC lib/ftl/ftl_writer.o 00:05:00.162 CC lib/nvmf/rdma.o 00:05:00.162 CC lib/ftl/ftl_rq.o 00:05:00.162 CC lib/nvmf/auth.o 00:05:00.162 CC lib/ftl/ftl_reloc.o 00:05:00.162 CC lib/ftl/ftl_l2p_cache.o 00:05:00.162 CC lib/ftl/ftl_p2l.o 00:05:00.162 CC lib/ftl/ftl_p2l_log.o 00:05:00.162 CC lib/ftl/mngt/ftl_mngt.o 00:05:00.162 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:00.162 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:00.162 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:00.162 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:00.162 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:00.738 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:00.738 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:00.738 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:00.738 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:00.738 CC lib/ftl/mngt/ftl_mngt_p2l.o 
00:05:00.738 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:00.738 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:00.738 CC lib/ftl/utils/ftl_conf.o 00:05:00.738 CC lib/ftl/utils/ftl_md.o 00:05:00.738 CC lib/ftl/utils/ftl_mempool.o 00:05:00.738 CC lib/ftl/utils/ftl_bitmap.o 00:05:00.738 CC lib/ftl/utils/ftl_property.o 00:05:00.738 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:00.738 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:00.738 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:00.738 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:00.738 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:00.738 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:00.738 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:00.738 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:00.997 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:00.997 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:00.997 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:00.997 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:00.997 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:00.997 CC lib/ftl/base/ftl_base_dev.o 00:05:00.997 CC lib/ftl/base/ftl_base_bdev.o 00:05:00.997 CC lib/ftl/ftl_trace.o 00:05:00.997 LIB libspdk_nbd.a 00:05:00.997 SO libspdk_nbd.so.7.0 00:05:01.255 LIB libspdk_scsi.a 00:05:01.255 SYMLINK libspdk_nbd.so 00:05:01.255 SO libspdk_scsi.so.9.0 00:05:01.255 SYMLINK libspdk_scsi.so 00:05:01.255 LIB libspdk_ublk.a 00:05:01.255 SO libspdk_ublk.so.3.0 00:05:01.513 SYMLINK libspdk_ublk.so 00:05:01.513 CC lib/vhost/vhost.o 00:05:01.513 CC lib/iscsi/conn.o 00:05:01.513 CC lib/iscsi/init_grp.o 00:05:01.513 CC lib/vhost/vhost_rpc.o 00:05:01.513 CC lib/vhost/vhost_scsi.o 00:05:01.513 CC lib/iscsi/iscsi.o 00:05:01.513 CC lib/iscsi/param.o 00:05:01.513 CC lib/vhost/vhost_blk.o 00:05:01.513 CC lib/iscsi/portal_grp.o 00:05:01.513 CC lib/vhost/rte_vhost_user.o 00:05:01.513 CC lib/iscsi/tgt_node.o 00:05:01.513 CC lib/iscsi/iscsi_subsystem.o 00:05:01.513 CC lib/iscsi/iscsi_rpc.o 00:05:01.513 CC lib/iscsi/task.o 00:05:01.771 LIB libspdk_ftl.a 00:05:01.771 SO libspdk_ftl.so.9.0 00:05:02.029 SYMLINK libspdk_ftl.so 00:05:02.602 LIB libspdk_vhost.a 00:05:02.602 SO libspdk_vhost.so.8.0 00:05:02.860 SYMLINK libspdk_vhost.so 00:05:02.860 LIB libspdk_nvmf.a 00:05:02.860 LIB libspdk_iscsi.a 00:05:02.860 SO libspdk_nvmf.so.20.0 00:05:02.860 SO libspdk_iscsi.so.8.0 00:05:03.118 SYMLINK libspdk_iscsi.so 00:05:03.118 SYMLINK libspdk_nvmf.so 00:05:03.376 CC module/vfu_device/vfu_virtio.o 00:05:03.376 CC module/vfu_device/vfu_virtio_blk.o 00:05:03.376 CC module/env_dpdk/env_dpdk_rpc.o 00:05:03.376 CC module/vfu_device/vfu_virtio_scsi.o 00:05:03.376 CC module/vfu_device/vfu_virtio_rpc.o 00:05:03.376 CC module/vfu_device/vfu_virtio_fs.o 00:05:03.376 CC module/accel/error/accel_error.o 00:05:03.376 CC module/accel/error/accel_error_rpc.o 00:05:03.376 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:03.376 CC module/accel/dsa/accel_dsa.o 00:05:03.376 CC module/keyring/linux/keyring.o 00:05:03.376 CC module/sock/posix/posix.o 00:05:03.376 CC module/scheduler/gscheduler/gscheduler.o 00:05:03.376 CC module/accel/dsa/accel_dsa_rpc.o 00:05:03.376 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:03.376 CC module/keyring/linux/keyring_rpc.o 00:05:03.376 CC module/keyring/file/keyring.o 00:05:03.376 CC module/accel/iaa/accel_iaa.o 00:05:03.376 CC module/keyring/file/keyring_rpc.o 00:05:03.376 CC module/accel/iaa/accel_iaa_rpc.o 00:05:03.376 CC module/fsdev/aio/fsdev_aio.o 00:05:03.376 CC module/accel/ioat/accel_ioat.o 00:05:03.376 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:03.376 CC module/fsdev/aio/linux_aio_mgr.o 00:05:03.376 CC 
module/accel/ioat/accel_ioat_rpc.o 00:05:03.376 CC module/blob/bdev/blob_bdev.o 00:05:03.698 LIB libspdk_env_dpdk_rpc.a 00:05:03.698 SO libspdk_env_dpdk_rpc.so.6.0 00:05:03.698 SYMLINK libspdk_env_dpdk_rpc.so 00:05:03.698 LIB libspdk_scheduler_gscheduler.a 00:05:03.698 LIB libspdk_scheduler_dpdk_governor.a 00:05:03.698 SO libspdk_scheduler_gscheduler.so.4.0 00:05:03.698 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:03.698 LIB libspdk_accel_error.a 00:05:03.698 LIB libspdk_scheduler_dynamic.a 00:05:03.698 LIB libspdk_keyring_linux.a 00:05:03.698 LIB libspdk_keyring_file.a 00:05:03.698 LIB libspdk_accel_iaa.a 00:05:03.698 LIB libspdk_accel_ioat.a 00:05:03.698 SYMLINK libspdk_scheduler_gscheduler.so 00:05:03.698 SO libspdk_scheduler_dynamic.so.4.0 00:05:03.698 SO libspdk_keyring_linux.so.1.0 00:05:03.698 SO libspdk_accel_error.so.2.0 00:05:03.698 SO libspdk_keyring_file.so.2.0 00:05:03.698 SO libspdk_accel_iaa.so.3.0 00:05:03.698 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:03.698 SO libspdk_accel_ioat.so.6.0 00:05:03.698 SYMLINK libspdk_scheduler_dynamic.so 00:05:03.698 SYMLINK libspdk_keyring_linux.so 00:05:03.698 SYMLINK libspdk_accel_error.so 00:05:03.698 SYMLINK libspdk_keyring_file.so 00:05:03.698 SYMLINK libspdk_accel_iaa.so 00:05:03.698 LIB libspdk_accel_dsa.a 00:05:03.698 SYMLINK libspdk_accel_ioat.so 00:05:03.956 LIB libspdk_blob_bdev.a 00:05:03.956 SO libspdk_accel_dsa.so.5.0 00:05:03.956 SO libspdk_blob_bdev.so.12.0 00:05:03.956 SYMLINK libspdk_accel_dsa.so 00:05:03.956 SYMLINK libspdk_blob_bdev.so 00:05:03.956 LIB libspdk_vfu_device.a 00:05:04.215 SO libspdk_vfu_device.so.3.0 00:05:04.216 CC module/bdev/delay/vbdev_delay.o 00:05:04.216 CC module/bdev/error/vbdev_error.o 00:05:04.216 CC module/bdev/lvol/vbdev_lvol.o 00:05:04.216 CC module/bdev/error/vbdev_error_rpc.o 00:05:04.216 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:04.216 CC module/bdev/gpt/gpt.o 00:05:04.216 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:04.216 CC module/bdev/gpt/vbdev_gpt.o 00:05:04.216 CC module/blobfs/bdev/blobfs_bdev.o 00:05:04.216 CC module/bdev/null/bdev_null.o 00:05:04.216 CC module/bdev/malloc/bdev_malloc.o 00:05:04.216 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:04.216 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:04.216 CC module/bdev/null/bdev_null_rpc.o 00:05:04.216 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:04.216 CC module/bdev/ftl/bdev_ftl.o 00:05:04.216 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:04.216 CC module/bdev/nvme/bdev_nvme.o 00:05:04.216 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:04.216 CC module/bdev/aio/bdev_aio.o 00:05:04.216 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:04.216 CC module/bdev/aio/bdev_aio_rpc.o 00:05:04.216 CC module/bdev/nvme/nvme_rpc.o 00:05:04.216 CC module/bdev/raid/bdev_raid.o 00:05:04.216 CC module/bdev/nvme/bdev_mdns_client.o 00:05:04.216 CC module/bdev/split/vbdev_split.o 00:05:04.216 CC module/bdev/nvme/vbdev_opal.o 00:05:04.216 CC module/bdev/raid/bdev_raid_rpc.o 00:05:04.216 CC module/bdev/raid/bdev_raid_sb.o 00:05:04.216 CC module/bdev/split/vbdev_split_rpc.o 00:05:04.216 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:04.216 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:04.216 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:04.216 CC module/bdev/raid/raid0.o 00:05:04.216 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:04.216 CC module/bdev/raid/raid1.o 00:05:04.216 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:04.216 CC module/bdev/raid/concat.o 00:05:04.216 CC module/bdev/passthru/vbdev_passthru.o 00:05:04.216 CC 
module/bdev/iscsi/bdev_iscsi.o 00:05:04.216 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:04.216 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:04.216 SYMLINK libspdk_vfu_device.so 00:05:04.216 LIB libspdk_fsdev_aio.a 00:05:04.216 SO libspdk_fsdev_aio.so.1.0 00:05:04.474 LIB libspdk_sock_posix.a 00:05:04.474 SYMLINK libspdk_fsdev_aio.so 00:05:04.474 SO libspdk_sock_posix.so.6.0 00:05:04.474 LIB libspdk_bdev_split.a 00:05:04.474 SYMLINK libspdk_sock_posix.so 00:05:04.474 SO libspdk_bdev_split.so.6.0 00:05:04.474 LIB libspdk_blobfs_bdev.a 00:05:04.474 LIB libspdk_bdev_passthru.a 00:05:04.474 SO libspdk_blobfs_bdev.so.6.0 00:05:04.474 SO libspdk_bdev_passthru.so.6.0 00:05:04.732 SYMLINK libspdk_bdev_split.so 00:05:04.732 LIB libspdk_bdev_error.a 00:05:04.732 SO libspdk_bdev_error.so.6.0 00:05:04.732 SYMLINK libspdk_blobfs_bdev.so 00:05:04.732 LIB libspdk_bdev_gpt.a 00:05:04.732 SYMLINK libspdk_bdev_passthru.so 00:05:04.732 LIB libspdk_bdev_ftl.a 00:05:04.732 SO libspdk_bdev_gpt.so.6.0 00:05:04.732 LIB libspdk_bdev_null.a 00:05:04.732 SO libspdk_bdev_ftl.so.6.0 00:05:04.732 SYMLINK libspdk_bdev_error.so 00:05:04.732 SO libspdk_bdev_null.so.6.0 00:05:04.732 SYMLINK libspdk_bdev_gpt.so 00:05:04.732 LIB libspdk_bdev_delay.a 00:05:04.732 LIB libspdk_bdev_aio.a 00:05:04.732 SYMLINK libspdk_bdev_ftl.so 00:05:04.732 SYMLINK libspdk_bdev_null.so 00:05:04.732 LIB libspdk_bdev_zone_block.a 00:05:04.732 SO libspdk_bdev_aio.so.6.0 00:05:04.732 SO libspdk_bdev_delay.so.6.0 00:05:04.732 SO libspdk_bdev_zone_block.so.6.0 00:05:04.732 LIB libspdk_bdev_iscsi.a 00:05:04.732 LIB libspdk_bdev_malloc.a 00:05:04.732 SO libspdk_bdev_iscsi.so.6.0 00:05:04.732 SYMLINK libspdk_bdev_delay.so 00:05:04.732 SO libspdk_bdev_malloc.so.6.0 00:05:04.732 SYMLINK libspdk_bdev_aio.so 00:05:04.732 SYMLINK libspdk_bdev_zone_block.so 00:05:04.990 SYMLINK libspdk_bdev_iscsi.so 00:05:04.990 SYMLINK libspdk_bdev_malloc.so 00:05:04.990 LIB libspdk_bdev_lvol.a 00:05:04.990 SO libspdk_bdev_lvol.so.6.0 00:05:04.990 LIB libspdk_bdev_virtio.a 00:05:04.990 SO libspdk_bdev_virtio.so.6.0 00:05:04.990 SYMLINK libspdk_bdev_lvol.so 00:05:04.990 SYMLINK libspdk_bdev_virtio.so 00:05:05.552 LIB libspdk_bdev_raid.a 00:05:05.552 SO libspdk_bdev_raid.so.6.0 00:05:05.552 SYMLINK libspdk_bdev_raid.so 00:05:06.924 LIB libspdk_bdev_nvme.a 00:05:06.924 SO libspdk_bdev_nvme.so.7.1 00:05:06.924 SYMLINK libspdk_bdev_nvme.so 00:05:07.490 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:07.490 CC module/event/subsystems/vmd/vmd.o 00:05:07.490 CC module/event/subsystems/fsdev/fsdev.o 00:05:07.490 CC module/event/subsystems/iobuf/iobuf.o 00:05:07.490 CC module/event/subsystems/sock/sock.o 00:05:07.490 CC module/event/subsystems/scheduler/scheduler.o 00:05:07.490 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:07.490 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:07.490 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:07.490 CC module/event/subsystems/keyring/keyring.o 00:05:07.490 LIB libspdk_event_keyring.a 00:05:07.490 LIB libspdk_event_vhost_blk.a 00:05:07.490 LIB libspdk_event_fsdev.a 00:05:07.490 LIB libspdk_event_vmd.a 00:05:07.490 LIB libspdk_event_vfu_tgt.a 00:05:07.490 LIB libspdk_event_scheduler.a 00:05:07.490 LIB libspdk_event_sock.a 00:05:07.490 SO libspdk_event_keyring.so.1.0 00:05:07.490 SO libspdk_event_vhost_blk.so.3.0 00:05:07.490 SO libspdk_event_fsdev.so.1.0 00:05:07.490 LIB libspdk_event_iobuf.a 00:05:07.490 SO libspdk_event_scheduler.so.4.0 00:05:07.490 SO libspdk_event_vfu_tgt.so.3.0 00:05:07.490 SO libspdk_event_sock.so.5.0 
00:05:07.490 SO libspdk_event_vmd.so.6.0 00:05:07.490 SO libspdk_event_iobuf.so.3.0 00:05:07.490 SYMLINK libspdk_event_keyring.so 00:05:07.490 SYMLINK libspdk_event_vhost_blk.so 00:05:07.490 SYMLINK libspdk_event_fsdev.so 00:05:07.490 SYMLINK libspdk_event_vfu_tgt.so 00:05:07.490 SYMLINK libspdk_event_scheduler.so 00:05:07.490 SYMLINK libspdk_event_sock.so 00:05:07.749 SYMLINK libspdk_event_vmd.so 00:05:07.750 SYMLINK libspdk_event_iobuf.so 00:05:07.750 CC module/event/subsystems/accel/accel.o 00:05:08.009 LIB libspdk_event_accel.a 00:05:08.009 SO libspdk_event_accel.so.6.0 00:05:08.009 SYMLINK libspdk_event_accel.so 00:05:08.267 CC module/event/subsystems/bdev/bdev.o 00:05:08.267 LIB libspdk_event_bdev.a 00:05:08.524 SO libspdk_event_bdev.so.6.0 00:05:08.524 SYMLINK libspdk_event_bdev.so 00:05:08.524 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:08.524 CC module/event/subsystems/scsi/scsi.o 00:05:08.524 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:08.524 CC module/event/subsystems/nbd/nbd.o 00:05:08.524 CC module/event/subsystems/ublk/ublk.o 00:05:08.782 LIB libspdk_event_ublk.a 00:05:08.782 LIB libspdk_event_nbd.a 00:05:08.782 LIB libspdk_event_scsi.a 00:05:08.782 SO libspdk_event_ublk.so.3.0 00:05:08.782 SO libspdk_event_nbd.so.6.0 00:05:08.782 SO libspdk_event_scsi.so.6.0 00:05:08.782 SYMLINK libspdk_event_ublk.so 00:05:08.782 SYMLINK libspdk_event_nbd.so 00:05:08.782 SYMLINK libspdk_event_scsi.so 00:05:08.782 LIB libspdk_event_nvmf.a 00:05:08.782 SO libspdk_event_nvmf.so.6.0 00:05:09.039 SYMLINK libspdk_event_nvmf.so 00:05:09.039 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:09.039 CC module/event/subsystems/iscsi/iscsi.o 00:05:09.296 LIB libspdk_event_vhost_scsi.a 00:05:09.296 SO libspdk_event_vhost_scsi.so.3.0 00:05:09.296 LIB libspdk_event_iscsi.a 00:05:09.296 SO libspdk_event_iscsi.so.6.0 00:05:09.296 SYMLINK libspdk_event_vhost_scsi.so 00:05:09.296 SYMLINK libspdk_event_iscsi.so 00:05:09.296 SO libspdk.so.6.0 00:05:09.296 SYMLINK libspdk.so 00:05:09.561 CC app/trace_record/trace_record.o 00:05:09.561 CC app/spdk_nvme_discover/discovery_aer.o 00:05:09.561 CC app/spdk_nvme_perf/perf.o 00:05:09.561 CC app/spdk_lspci/spdk_lspci.o 00:05:09.561 CXX app/trace/trace.o 00:05:09.561 CC app/spdk_top/spdk_top.o 00:05:09.561 CC app/spdk_nvme_identify/identify.o 00:05:09.561 CC test/rpc_client/rpc_client_test.o 00:05:09.561 TEST_HEADER include/spdk/accel.h 00:05:09.561 TEST_HEADER include/spdk/accel_module.h 00:05:09.561 TEST_HEADER include/spdk/assert.h 00:05:09.561 TEST_HEADER include/spdk/barrier.h 00:05:09.561 TEST_HEADER include/spdk/base64.h 00:05:09.561 TEST_HEADER include/spdk/bdev.h 00:05:09.561 TEST_HEADER include/spdk/bdev_module.h 00:05:09.561 TEST_HEADER include/spdk/bdev_zone.h 00:05:09.561 TEST_HEADER include/spdk/bit_array.h 00:05:09.561 TEST_HEADER include/spdk/bit_pool.h 00:05:09.561 TEST_HEADER include/spdk/blob_bdev.h 00:05:09.561 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:09.561 TEST_HEADER include/spdk/blobfs.h 00:05:09.561 TEST_HEADER include/spdk/blob.h 00:05:09.561 TEST_HEADER include/spdk/conf.h 00:05:09.561 TEST_HEADER include/spdk/config.h 00:05:09.561 TEST_HEADER include/spdk/cpuset.h 00:05:09.561 TEST_HEADER include/spdk/crc16.h 00:05:09.561 TEST_HEADER include/spdk/crc32.h 00:05:09.561 TEST_HEADER include/spdk/crc64.h 00:05:09.561 TEST_HEADER include/spdk/dif.h 00:05:09.561 TEST_HEADER include/spdk/endian.h 00:05:09.561 TEST_HEADER include/spdk/dma.h 00:05:09.561 TEST_HEADER include/spdk/env_dpdk.h 00:05:09.561 TEST_HEADER include/spdk/env.h 
00:05:09.561 TEST_HEADER include/spdk/event.h 00:05:09.561 TEST_HEADER include/spdk/fd_group.h 00:05:09.561 TEST_HEADER include/spdk/fd.h 00:05:09.561 TEST_HEADER include/spdk/file.h 00:05:09.561 TEST_HEADER include/spdk/fsdev.h 00:05:09.561 TEST_HEADER include/spdk/ftl.h 00:05:09.561 TEST_HEADER include/spdk/fsdev_module.h 00:05:09.561 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:09.561 TEST_HEADER include/spdk/gpt_spec.h 00:05:09.561 TEST_HEADER include/spdk/hexlify.h 00:05:09.561 TEST_HEADER include/spdk/histogram_data.h 00:05:09.561 TEST_HEADER include/spdk/idxd.h 00:05:09.561 TEST_HEADER include/spdk/idxd_spec.h 00:05:09.561 TEST_HEADER include/spdk/init.h 00:05:09.561 TEST_HEADER include/spdk/ioat.h 00:05:09.561 TEST_HEADER include/spdk/ioat_spec.h 00:05:09.561 TEST_HEADER include/spdk/iscsi_spec.h 00:05:09.561 TEST_HEADER include/spdk/json.h 00:05:09.561 TEST_HEADER include/spdk/jsonrpc.h 00:05:09.561 TEST_HEADER include/spdk/keyring.h 00:05:09.561 TEST_HEADER include/spdk/keyring_module.h 00:05:09.561 TEST_HEADER include/spdk/likely.h 00:05:09.561 TEST_HEADER include/spdk/log.h 00:05:09.561 TEST_HEADER include/spdk/lvol.h 00:05:09.561 TEST_HEADER include/spdk/md5.h 00:05:09.561 TEST_HEADER include/spdk/mmio.h 00:05:09.561 TEST_HEADER include/spdk/memory.h 00:05:09.561 TEST_HEADER include/spdk/nbd.h 00:05:09.561 TEST_HEADER include/spdk/net.h 00:05:09.561 TEST_HEADER include/spdk/nvme.h 00:05:09.561 TEST_HEADER include/spdk/notify.h 00:05:09.561 TEST_HEADER include/spdk/nvme_intel.h 00:05:09.562 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:09.562 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:09.562 TEST_HEADER include/spdk/nvme_spec.h 00:05:09.562 TEST_HEADER include/spdk/nvme_zns.h 00:05:09.562 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:09.562 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:09.562 TEST_HEADER include/spdk/nvmf.h 00:05:09.562 TEST_HEADER include/spdk/nvmf_spec.h 00:05:09.562 TEST_HEADER include/spdk/nvmf_transport.h 00:05:09.562 TEST_HEADER include/spdk/opal.h 00:05:09.562 TEST_HEADER include/spdk/opal_spec.h 00:05:09.562 TEST_HEADER include/spdk/pci_ids.h 00:05:09.562 TEST_HEADER include/spdk/pipe.h 00:05:09.562 TEST_HEADER include/spdk/queue.h 00:05:09.562 TEST_HEADER include/spdk/reduce.h 00:05:09.562 TEST_HEADER include/spdk/rpc.h 00:05:09.562 TEST_HEADER include/spdk/scheduler.h 00:05:09.562 TEST_HEADER include/spdk/scsi_spec.h 00:05:09.562 TEST_HEADER include/spdk/scsi.h 00:05:09.562 TEST_HEADER include/spdk/sock.h 00:05:09.562 TEST_HEADER include/spdk/stdinc.h 00:05:09.562 TEST_HEADER include/spdk/string.h 00:05:09.562 TEST_HEADER include/spdk/trace.h 00:05:09.562 TEST_HEADER include/spdk/thread.h 00:05:09.562 TEST_HEADER include/spdk/trace_parser.h 00:05:09.562 TEST_HEADER include/spdk/ublk.h 00:05:09.562 TEST_HEADER include/spdk/tree.h 00:05:09.562 TEST_HEADER include/spdk/util.h 00:05:09.562 TEST_HEADER include/spdk/uuid.h 00:05:09.562 TEST_HEADER include/spdk/version.h 00:05:09.562 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:09.562 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:09.562 TEST_HEADER include/spdk/vhost.h 00:05:09.562 TEST_HEADER include/spdk/vmd.h 00:05:09.562 TEST_HEADER include/spdk/xor.h 00:05:09.562 TEST_HEADER include/spdk/zipf.h 00:05:09.562 CXX test/cpp_headers/accel.o 00:05:09.562 CXX test/cpp_headers/accel_module.o 00:05:09.562 CC app/spdk_dd/spdk_dd.o 00:05:09.562 CXX test/cpp_headers/assert.o 00:05:09.562 CXX test/cpp_headers/barrier.o 00:05:09.562 CXX test/cpp_headers/base64.o 00:05:09.562 CXX test/cpp_headers/bdev.o 
00:05:09.562 CXX test/cpp_headers/bdev_module.o 00:05:09.562 CXX test/cpp_headers/bdev_zone.o 00:05:09.562 CXX test/cpp_headers/bit_array.o 00:05:09.562 CXX test/cpp_headers/bit_pool.o 00:05:09.562 CXX test/cpp_headers/blob_bdev.o 00:05:09.562 CXX test/cpp_headers/blobfs_bdev.o 00:05:09.562 CXX test/cpp_headers/blobfs.o 00:05:09.562 CXX test/cpp_headers/blob.o 00:05:09.562 CXX test/cpp_headers/conf.o 00:05:09.562 CXX test/cpp_headers/config.o 00:05:09.562 CXX test/cpp_headers/cpuset.o 00:05:09.562 CXX test/cpp_headers/crc16.o 00:05:09.562 CC app/iscsi_tgt/iscsi_tgt.o 00:05:09.562 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:09.562 CC app/nvmf_tgt/nvmf_main.o 00:05:09.823 CXX test/cpp_headers/crc32.o 00:05:09.823 CC examples/ioat/verify/verify.o 00:05:09.823 CC examples/util/zipf/zipf.o 00:05:09.823 CC app/spdk_tgt/spdk_tgt.o 00:05:09.823 CC examples/ioat/perf/perf.o 00:05:09.823 CC test/env/vtophys/vtophys.o 00:05:09.823 CC test/env/memory/memory_ut.o 00:05:09.823 CC test/app/jsoncat/jsoncat.o 00:05:09.823 CC test/app/histogram_perf/histogram_perf.o 00:05:09.823 CC test/env/pci/pci_ut.o 00:05:09.823 CC test/thread/poller_perf/poller_perf.o 00:05:09.823 CC test/app/stub/stub.o 00:05:09.823 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:09.823 CC app/fio/nvme/fio_plugin.o 00:05:09.823 CC test/dma/test_dma/test_dma.o 00:05:09.823 CC test/app/bdev_svc/bdev_svc.o 00:05:09.823 CC app/fio/bdev/fio_plugin.o 00:05:09.823 CC test/env/mem_callbacks/mem_callbacks.o 00:05:09.823 LINK spdk_lspci 00:05:09.823 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:10.085 LINK rpc_client_test 00:05:10.085 LINK spdk_nvme_discover 00:05:10.085 LINK vtophys 00:05:10.085 LINK jsoncat 00:05:10.085 LINK histogram_perf 00:05:10.085 LINK zipf 00:05:10.085 LINK poller_perf 00:05:10.085 CXX test/cpp_headers/crc64.o 00:05:10.085 CXX test/cpp_headers/dif.o 00:05:10.085 LINK spdk_trace_record 00:05:10.085 CXX test/cpp_headers/dma.o 00:05:10.085 CXX test/cpp_headers/env_dpdk.o 00:05:10.085 CXX test/cpp_headers/endian.o 00:05:10.085 LINK nvmf_tgt 00:05:10.085 LINK interrupt_tgt 00:05:10.085 CXX test/cpp_headers/env.o 00:05:10.085 CXX test/cpp_headers/event.o 00:05:10.085 LINK env_dpdk_post_init 00:05:10.085 CXX test/cpp_headers/fd_group.o 00:05:10.085 CXX test/cpp_headers/fd.o 00:05:10.085 CXX test/cpp_headers/file.o 00:05:10.085 CXX test/cpp_headers/fsdev.o 00:05:10.085 LINK iscsi_tgt 00:05:10.085 LINK stub 00:05:10.085 CXX test/cpp_headers/fsdev_module.o 00:05:10.085 CXX test/cpp_headers/ftl.o 00:05:10.085 CXX test/cpp_headers/fuse_dispatcher.o 00:05:10.085 LINK verify 00:05:10.085 CXX test/cpp_headers/gpt_spec.o 00:05:10.346 LINK ioat_perf 00:05:10.346 LINK bdev_svc 00:05:10.346 CXX test/cpp_headers/hexlify.o 00:05:10.346 CXX test/cpp_headers/histogram_data.o 00:05:10.346 LINK spdk_tgt 00:05:10.346 CXX test/cpp_headers/idxd.o 00:05:10.346 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:10.346 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:10.346 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:10.346 CXX test/cpp_headers/idxd_spec.o 00:05:10.346 CXX test/cpp_headers/init.o 00:05:10.346 CXX test/cpp_headers/ioat.o 00:05:10.607 LINK spdk_dd 00:05:10.607 LINK spdk_trace 00:05:10.607 CXX test/cpp_headers/ioat_spec.o 00:05:10.607 CXX test/cpp_headers/iscsi_spec.o 00:05:10.607 CXX test/cpp_headers/json.o 00:05:10.607 CXX test/cpp_headers/jsonrpc.o 00:05:10.607 CXX test/cpp_headers/keyring.o 00:05:10.607 CXX test/cpp_headers/keyring_module.o 00:05:10.607 CXX test/cpp_headers/likely.o 00:05:10.607 CXX 
test/cpp_headers/log.o 00:05:10.607 LINK pci_ut 00:05:10.607 CXX test/cpp_headers/lvol.o 00:05:10.607 CXX test/cpp_headers/md5.o 00:05:10.607 CXX test/cpp_headers/memory.o 00:05:10.607 CXX test/cpp_headers/mmio.o 00:05:10.607 CXX test/cpp_headers/nbd.o 00:05:10.607 CXX test/cpp_headers/net.o 00:05:10.607 CXX test/cpp_headers/notify.o 00:05:10.607 CXX test/cpp_headers/nvme.o 00:05:10.607 CXX test/cpp_headers/nvme_intel.o 00:05:10.607 CXX test/cpp_headers/nvme_ocssd.o 00:05:10.607 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:10.607 CXX test/cpp_headers/nvme_spec.o 00:05:10.607 CXX test/cpp_headers/nvme_zns.o 00:05:10.607 CXX test/cpp_headers/nvmf_cmd.o 00:05:10.869 CC test/event/reactor_perf/reactor_perf.o 00:05:10.869 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:10.869 CC test/event/reactor/reactor.o 00:05:10.869 CC test/event/event_perf/event_perf.o 00:05:10.869 LINK nvme_fuzz 00:05:10.869 LINK spdk_bdev 00:05:10.869 CC test/event/app_repeat/app_repeat.o 00:05:10.869 CC examples/thread/thread/thread_ex.o 00:05:10.869 CXX test/cpp_headers/nvmf.o 00:05:10.869 LINK test_dma 00:05:10.869 CC test/event/scheduler/scheduler.o 00:05:10.869 CXX test/cpp_headers/nvmf_spec.o 00:05:10.869 CC examples/idxd/perf/perf.o 00:05:10.869 CXX test/cpp_headers/nvmf_transport.o 00:05:10.869 LINK spdk_nvme 00:05:10.869 CC examples/sock/hello_world/hello_sock.o 00:05:10.869 CC examples/vmd/lsvmd/lsvmd.o 00:05:10.869 CC examples/vmd/led/led.o 00:05:10.869 CXX test/cpp_headers/opal.o 00:05:10.869 CXX test/cpp_headers/opal_spec.o 00:05:10.869 CXX test/cpp_headers/pci_ids.o 00:05:10.869 CXX test/cpp_headers/pipe.o 00:05:11.129 CXX test/cpp_headers/queue.o 00:05:11.129 CXX test/cpp_headers/reduce.o 00:05:11.129 CXX test/cpp_headers/rpc.o 00:05:11.129 CXX test/cpp_headers/scheduler.o 00:05:11.129 CXX test/cpp_headers/scsi.o 00:05:11.129 CXX test/cpp_headers/scsi_spec.o 00:05:11.129 CXX test/cpp_headers/sock.o 00:05:11.129 CXX test/cpp_headers/stdinc.o 00:05:11.129 CXX test/cpp_headers/string.o 00:05:11.129 CXX test/cpp_headers/thread.o 00:05:11.129 LINK reactor 00:05:11.129 CXX test/cpp_headers/trace.o 00:05:11.129 CXX test/cpp_headers/trace_parser.o 00:05:11.129 CXX test/cpp_headers/tree.o 00:05:11.129 CXX test/cpp_headers/ublk.o 00:05:11.129 LINK reactor_perf 00:05:11.129 CXX test/cpp_headers/util.o 00:05:11.129 CXX test/cpp_headers/uuid.o 00:05:11.129 LINK mem_callbacks 00:05:11.129 CXX test/cpp_headers/version.o 00:05:11.129 CXX test/cpp_headers/vfio_user_pci.o 00:05:11.129 LINK event_perf 00:05:11.129 CXX test/cpp_headers/vfio_user_spec.o 00:05:11.129 CC app/vhost/vhost.o 00:05:11.129 CXX test/cpp_headers/vhost.o 00:05:11.129 LINK spdk_nvme_perf 00:05:11.129 CXX test/cpp_headers/vmd.o 00:05:11.129 CXX test/cpp_headers/xor.o 00:05:11.129 LINK app_repeat 00:05:11.129 LINK lsvmd 00:05:11.129 CXX test/cpp_headers/zipf.o 00:05:11.390 LINK led 00:05:11.390 LINK spdk_nvme_identify 00:05:11.390 LINK vhost_fuzz 00:05:11.390 LINK thread 00:05:11.390 LINK spdk_top 00:05:11.390 LINK scheduler 00:05:11.390 LINK hello_sock 00:05:11.648 LINK idxd_perf 00:05:11.648 CC test/nvme/reset/reset.o 00:05:11.648 CC test/nvme/err_injection/err_injection.o 00:05:11.648 CC test/nvme/simple_copy/simple_copy.o 00:05:11.648 CC test/nvme/aer/aer.o 00:05:11.648 CC test/nvme/startup/startup.o 00:05:11.648 CC test/nvme/e2edp/nvme_dp.o 00:05:11.648 CC test/nvme/connect_stress/connect_stress.o 00:05:11.648 CC test/nvme/sgl/sgl.o 00:05:11.648 CC test/nvme/overhead/overhead.o 00:05:11.648 CC test/nvme/compliance/nvme_compliance.o 00:05:11.648 CC 
test/nvme/reserve/reserve.o 00:05:11.648 CC test/nvme/boot_partition/boot_partition.o 00:05:11.648 LINK vhost 00:05:11.648 CC test/nvme/fused_ordering/fused_ordering.o 00:05:11.648 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:11.648 CC test/nvme/cuse/cuse.o 00:05:11.648 CC test/nvme/fdp/fdp.o 00:05:11.648 CC test/blobfs/mkfs/mkfs.o 00:05:11.648 CC test/accel/dif/dif.o 00:05:11.648 CC test/lvol/esnap/esnap.o 00:05:11.906 LINK boot_partition 00:05:11.906 LINK connect_stress 00:05:11.906 LINK err_injection 00:05:11.906 LINK reserve 00:05:11.906 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:11.906 CC examples/nvme/reconnect/reconnect.o 00:05:11.906 LINK fused_ordering 00:05:11.906 CC examples/nvme/arbitration/arbitration.o 00:05:11.906 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:11.906 LINK simple_copy 00:05:11.906 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:11.906 CC examples/nvme/abort/abort.o 00:05:11.906 CC examples/nvme/hotplug/hotplug.o 00:05:11.906 CC examples/nvme/hello_world/hello_world.o 00:05:11.906 LINK startup 00:05:11.906 LINK reset 00:05:11.906 LINK nvme_dp 00:05:11.906 LINK aer 00:05:11.906 LINK sgl 00:05:11.906 CC examples/accel/perf/accel_perf.o 00:05:11.906 LINK memory_ut 00:05:11.906 LINK doorbell_aers 00:05:11.906 LINK mkfs 00:05:12.165 LINK overhead 00:05:12.165 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:12.165 CC examples/blob/cli/blobcli.o 00:05:12.165 LINK fdp 00:05:12.165 CC examples/blob/hello_world/hello_blob.o 00:05:12.165 LINK nvme_compliance 00:05:12.165 LINK pmr_persistence 00:05:12.165 LINK hotplug 00:05:12.165 LINK cmb_copy 00:05:12.165 LINK hello_world 00:05:12.423 LINK reconnect 00:05:12.423 LINK arbitration 00:05:12.423 LINK hello_fsdev 00:05:12.423 LINK abort 00:05:12.423 LINK nvme_manage 00:05:12.423 LINK hello_blob 00:05:12.423 LINK dif 00:05:12.681 LINK accel_perf 00:05:12.681 LINK blobcli 00:05:12.939 CC test/bdev/bdevio/bdevio.o 00:05:12.939 LINK iscsi_fuzz 00:05:12.939 CC examples/bdev/hello_world/hello_bdev.o 00:05:12.939 CC examples/bdev/bdevperf/bdevperf.o 00:05:13.197 LINK hello_bdev 00:05:13.455 LINK bdevio 00:05:13.455 LINK cuse 00:05:13.713 LINK bdevperf 00:05:14.280 CC examples/nvmf/nvmf/nvmf.o 00:05:14.538 LINK nvmf 00:05:17.077 LINK esnap 00:05:17.077 00:05:17.077 real 1m9.983s 00:05:17.077 user 11m56.301s 00:05:17.077 sys 2m38.347s 00:05:17.077 06:09:07 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:17.077 06:09:07 make -- common/autotest_common.sh@10 -- $ set +x 00:05:17.077 ************************************ 00:05:17.077 END TEST make 00:05:17.077 ************************************ 00:05:17.336 06:09:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:17.336 06:09:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:17.336 06:09:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:17.336 06:09:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.336 06:09:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:17.336 06:09:07 -- pm/common@44 -- $ pid=865757 00:05:17.336 06:09:07 -- pm/common@50 -- $ kill -TERM 865757 00:05:17.336 06:09:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.336 06:09:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:17.336 06:09:07 -- pm/common@44 -- $ pid=865759 00:05:17.336 06:09:07 -- pm/common@50 -- $ kill -TERM 865759 00:05:17.336 06:09:07 -- 
pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.336 06:09:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:17.336 06:09:07 -- pm/common@44 -- $ pid=865761 00:05:17.336 06:09:07 -- pm/common@50 -- $ kill -TERM 865761 00:05:17.336 06:09:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.336 06:09:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:17.336 06:09:07 -- pm/common@44 -- $ pid=865795 00:05:17.336 06:09:07 -- pm/common@50 -- $ sudo -E kill -TERM 865795 00:05:17.336 06:09:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:17.336 06:09:07 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:17.336 06:09:07 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.336 06:09:07 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.336 06:09:07 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.336 06:09:07 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.336 06:09:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.336 06:09:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.336 06:09:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.336 06:09:07 -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.336 06:09:07 -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.336 06:09:07 -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.336 06:09:07 -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.336 06:09:07 -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.336 06:09:07 -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.336 06:09:07 -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.336 06:09:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.336 06:09:07 -- scripts/common.sh@344 -- # case "$op" in 00:05:17.336 06:09:07 -- scripts/common.sh@345 -- # : 1 00:05:17.336 06:09:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.336 06:09:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.336 06:09:07 -- scripts/common.sh@365 -- # decimal 1 00:05:17.336 06:09:07 -- scripts/common.sh@353 -- # local d=1 00:05:17.336 06:09:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.336 06:09:07 -- scripts/common.sh@355 -- # echo 1 00:05:17.336 06:09:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.336 06:09:07 -- scripts/common.sh@366 -- # decimal 2 00:05:17.336 06:09:07 -- scripts/common.sh@353 -- # local d=2 00:05:17.336 06:09:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.336 06:09:07 -- scripts/common.sh@355 -- # echo 2 00:05:17.336 06:09:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.336 06:09:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.336 06:09:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.336 06:09:07 -- scripts/common.sh@368 -- # return 0 00:05:17.336 06:09:07 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.336 06:09:07 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.336 --rc genhtml_branch_coverage=1 00:05:17.336 --rc genhtml_function_coverage=1 00:05:17.336 --rc genhtml_legend=1 00:05:17.336 --rc geninfo_all_blocks=1 00:05:17.336 --rc geninfo_unexecuted_blocks=1 00:05:17.336 00:05:17.336 ' 00:05:17.336 06:09:07 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.337 --rc genhtml_branch_coverage=1 00:05:17.337 --rc genhtml_function_coverage=1 00:05:17.337 --rc genhtml_legend=1 00:05:17.337 --rc geninfo_all_blocks=1 00:05:17.337 --rc geninfo_unexecuted_blocks=1 00:05:17.337 00:05:17.337 ' 00:05:17.337 06:09:07 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.337 --rc genhtml_branch_coverage=1 00:05:17.337 --rc genhtml_function_coverage=1 00:05:17.337 --rc genhtml_legend=1 00:05:17.337 --rc geninfo_all_blocks=1 00:05:17.337 --rc geninfo_unexecuted_blocks=1 00:05:17.337 00:05:17.337 ' 00:05:17.337 06:09:07 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.337 --rc genhtml_branch_coverage=1 00:05:17.337 --rc genhtml_function_coverage=1 00:05:17.337 --rc genhtml_legend=1 00:05:17.337 --rc geninfo_all_blocks=1 00:05:17.337 --rc geninfo_unexecuted_blocks=1 00:05:17.337 00:05:17.337 ' 00:05:17.337 06:09:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.337 06:09:07 -- nvmf/common.sh@7 -- # uname -s 00:05:17.337 06:09:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.337 06:09:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.337 06:09:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.337 06:09:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.337 06:09:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.337 06:09:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.337 06:09:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.337 06:09:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.337 06:09:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.337 06:09:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.337 06:09:07 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:17.337 06:09:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:17.337 06:09:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.337 06:09:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.337 06:09:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:17.337 06:09:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.337 06:09:07 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.337 06:09:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.337 06:09:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.337 06:09:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.337 06:09:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.337 06:09:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.337 06:09:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.337 06:09:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.337 06:09:07 -- paths/export.sh@5 -- # export PATH 00:05:17.337 06:09:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.337 06:09:07 -- nvmf/common.sh@51 -- # : 0 00:05:17.337 06:09:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.337 06:09:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.337 06:09:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.337 06:09:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.337 06:09:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.337 06:09:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.337 06:09:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.337 06:09:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.337 06:09:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.337 06:09:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:17.337 06:09:07 -- spdk/autotest.sh@32 -- # uname -s 00:05:17.337 06:09:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:17.337 06:09:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:17.337 06:09:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
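At this point autotest.sh has saved the existing core_pattern handler and created the coredump output directory; the next entry swaps in SPDK's core-collector in place of systemd-coredump. A minimal sketch of that mechanism, with placeholder paths rather than the exact workspace ones:

    # save the current handler, then pipe future kernel coredumps to a collector script
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
    echo '|/path/to/core-collector.sh %P %s %t' | sudo tee /proc/sys/kernel/core_pattern
    # %P = PID, %s = signal number, %t = time of dump (see core(5))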
00:05:17.337 06:09:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:17.337 06:09:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:17.337 06:09:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:17.337 06:09:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:17.337 06:09:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:17.337 06:09:07 -- spdk/autotest.sh@48 -- # udevadm_pid=925295 00:05:17.337 06:09:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:17.337 06:09:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:17.337 06:09:07 -- pm/common@17 -- # local monitor 00:05:17.337 06:09:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.337 06:09:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.337 06:09:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.337 06:09:07 -- pm/common@21 -- # date +%s 00:05:17.337 06:09:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.337 06:09:07 -- pm/common@21 -- # date +%s 00:05:17.337 06:09:07 -- pm/common@25 -- # sleep 1 00:05:17.337 06:09:07 -- pm/common@21 -- # date +%s 00:05:17.337 06:09:07 -- pm/common@21 -- # date +%s 00:05:17.337 06:09:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733634547 00:05:17.337 06:09:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733634547 00:05:17.337 06:09:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733634547 00:05:17.337 06:09:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733634547 00:05:17.337 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733634547_collect-vmstat.pm.log 00:05:17.337 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733634547_collect-cpu-load.pm.log 00:05:17.337 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733634547_collect-cpu-temp.pm.log 00:05:17.337 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733634547_collect-bmc-pm.bmc.pm.log 00:05:18.716 06:09:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:18.716 06:09:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:18.716 06:09:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.716 06:09:08 -- common/autotest_common.sh@10 -- # set +x 00:05:18.716 06:09:08 -- spdk/autotest.sh@59 -- # create_test_list 00:05:18.716 06:09:08 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:18.716 06:09:08 -- common/autotest_common.sh@10 -- # set +x 00:05:18.716 06:09:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:18.716 06:09:08 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:18.716 06:09:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:18.716 06:09:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:18.716 06:09:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:18.716 06:09:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:18.716 06:09:08 -- common/autotest_common.sh@1457 -- # uname 00:05:18.716 06:09:08 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:18.716 06:09:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:18.716 06:09:08 -- common/autotest_common.sh@1477 -- # uname 00:05:18.716 06:09:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:18.716 06:09:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:18.716 06:09:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:18.716 lcov: LCOV version 1.15 00:05:18.716 06:09:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:36.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:36.839 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:58.766 06:09:45 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:58.766 06:09:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.766 06:09:45 -- common/autotest_common.sh@10 -- # set +x 00:05:58.766 06:09:45 -- spdk/autotest.sh@78 -- # rm -f 00:05:58.766 06:09:45 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:58.766 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:05:58.766 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:58.766 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:58.766 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:58.766 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:58.766 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:58.766 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:58.766 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:58.766 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:58.766 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:58.766 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:58.766 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:58.766 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:58.766 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:58.766 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:58.766 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:58.766 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:58.766 06:09:47 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:58.766 06:09:47 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:58.766 06:09:47 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:58.766 06:09:47 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:58.766 06:09:47 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:58.766 06:09:47 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:58.766 06:09:47 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:58.766 06:09:47 -- common/autotest_common.sh@1669 -- # bdf=0000:82:00.0 00:05:58.766 06:09:47 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:58.766 06:09:47 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:58.766 06:09:47 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:58.766 06:09:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:58.766 06:09:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:58.766 06:09:47 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:58.766 06:09:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:58.766 06:09:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:58.766 06:09:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:58.766 06:09:47 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:58.766 06:09:47 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:58.766 No valid GPT data, bailing 00:05:58.766 06:09:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:58.766 06:09:47 -- scripts/common.sh@394 -- # pt= 00:05:58.766 06:09:47 -- scripts/common.sh@395 -- # return 1 00:05:58.766 06:09:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:58.766 1+0 records in 00:05:58.766 1+0 records out 00:05:58.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00231788 s, 452 MB/s 00:05:58.766 06:09:47 -- spdk/autotest.sh@105 -- # sync 00:05:58.766 06:09:47 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:58.766 06:09:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:58.766 06:09:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:59.398 06:09:49 -- spdk/autotest.sh@111 -- # uname -s 00:05:59.398 06:09:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:59.398 06:09:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:59.398 06:09:49 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:00.776 Hugepages 00:06:00.776 node hugesize free / total 00:06:00.776 node0 1048576kB 0 / 0 00:06:00.776 node0 2048kB 0 / 0 00:06:00.776 node1 1048576kB 0 / 0 00:06:00.776 node1 2048kB 0 / 0 00:06:00.776 00:06:00.776 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:00.776 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:06:00.776 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:06:00.776 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:06:00.776 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:06:00.776 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:06:00.776 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:06:00.776 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:06:00.776 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:06:00.776 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:06:00.776 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:06:00.776 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma 
- - 00:06:00.776 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:06:00.776 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:06:00.776 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:06:00.776 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:06:00.776 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:06:00.776 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:00.776 06:09:50 -- spdk/autotest.sh@117 -- # uname -s 00:06:00.776 06:09:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:00.776 06:09:50 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:00.776 06:09:50 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:02.153 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:02.153 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:02.153 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:02.153 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:02.153 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:02.153 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:02.153 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:02.153 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:02.153 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:02.153 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:02.153 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:02.153 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:02.153 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:02.153 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:02.153 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:02.153 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:03.093 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:06:03.093 06:09:53 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:04.030 06:09:54 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:04.030 06:09:54 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:04.030 06:09:54 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:04.030 06:09:54 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:04.030 06:09:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:04.030 06:09:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:04.030 06:09:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:04.030 06:09:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:04.030 06:09:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:04.030 06:09:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:04.030 06:09:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:06:04.030 06:09:54 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:05.404 Waiting for block devices as requested 00:06:05.404 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:06:05.404 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:05.663 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:05.663 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:05.663 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:05.922 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:05.922 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:05.922 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:05.922 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:06.182 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:06.182 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 
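The get_nvme_bdfs helper traced above builds its device list by rendering SPDK's generated NVMe bdev config and pulling out the transport addresses. Run standalone it reduces to a one-liner; on this box it prints the single controller behind the status table above:

    # enumerate NVMe controller PCI addresses -> 0000:82:00.0 here
    scripts/gen_nvme.sh | jq -r '.config[].params.traddr'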
00:06:06.182 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:06.182 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:06.442 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:06.442 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:06.442 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:06.700 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:06.700 06:09:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:06.700 06:09:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:06:06.700 06:09:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:06.700 06:09:56 -- common/autotest_common.sh@1487 -- # grep 0000:82:00.0/nvme/nvme 00:06:06.700 06:09:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:06:06.700 06:09:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:06:06.700 06:09:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:06:06.700 06:09:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:06.700 06:09:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:06.700 06:09:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:06.700 06:09:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:06.700 06:09:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:06.700 06:09:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:06.700 06:09:56 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:06:06.700 06:09:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:06.700 06:09:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:06.700 06:09:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:06.700 06:09:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:06.700 06:09:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:06.700 06:09:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:06.700 06:09:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:06.700 06:09:56 -- common/autotest_common.sh@1543 -- # continue 00:06:06.700 06:09:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:06.700 06:09:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:06.700 06:09:56 -- common/autotest_common.sh@10 -- # set +x 00:06:06.700 06:09:56 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:06.701 06:09:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.701 06:09:56 -- common/autotest_common.sh@10 -- # set +x 00:06:06.701 06:09:56 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:08.078 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:08.078 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:08.078 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:08.078 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:08.078 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:08.078 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:08.078 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:08.078 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:08.078 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:08.078 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:08.078 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:08.078 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:08.078 0000:80:04.3 
(8086 0e23): ioatdma -> vfio-pci 00:06:08.078 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:08.078 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:08.078 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:09.017 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:06:09.276 06:09:59 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:09.276 06:09:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.276 06:09:59 -- common/autotest_common.sh@10 -- # set +x 00:06:09.276 06:09:59 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:09.276 06:09:59 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:09.276 06:09:59 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:09.276 06:09:59 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:09.276 06:09:59 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:09.276 06:09:59 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:09.276 06:09:59 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:09.276 06:09:59 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:09.277 06:09:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:09.277 06:09:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:09.277 06:09:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:09.277 06:09:59 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:09.277 06:09:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:09.277 06:09:59 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:09.277 06:09:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:06:09.277 06:09:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:09.277 06:09:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:06:09.277 06:09:59 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:06:09.277 06:09:59 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:09.277 06:09:59 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:06:09.277 06:09:59 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:06:09.277 06:09:59 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:82:00.0 00:06:09.277 06:09:59 -- common/autotest_common.sh@1579 -- # [[ -z 0000:82:00.0 ]] 00:06:09.277 06:09:59 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=936305 00:06:09.277 06:09:59 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.277 06:09:59 -- common/autotest_common.sh@1585 -- # waitforlisten 936305 00:06:09.277 06:09:59 -- common/autotest_common.sh@835 -- # '[' -z 936305 ']' 00:06:09.277 06:09:59 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.277 06:09:59 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.277 06:09:59 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.277 06:09:59 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.277 06:09:59 -- common/autotest_common.sh@10 -- # set +x 00:06:09.277 [2024-12-08 06:09:59.342930] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
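opal_revert_cleanup above backgrounds spdk_tgt and then blocks in waitforlisten until the target's JSON-RPC socket answers. A rough sketch of that wait loop, simplified from the real helper (which also bounds retries and verifies the PID is still alive):

    build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # poll the default RPC socket until the target is ready to serve requests
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done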
00:06:09.277 [2024-12-08 06:09:59.343040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936305 ] 00:06:09.536 [2024-12-08 06:09:59.409652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.536 [2024-12-08 06:09:59.469273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.796 06:09:59 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.796 06:09:59 -- common/autotest_common.sh@868 -- # return 0 00:06:09.796 06:09:59 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:06:09.796 06:09:59 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:06:09.796 06:09:59 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:06:13.097 nvme0n1 00:06:13.097 06:10:02 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:13.098 [2024-12-08 06:10:03.099006] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:13.098 [2024-12-08 06:10:03.099082] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:13.098 request: 00:06:13.098 { 00:06:13.098 "nvme_ctrlr_name": "nvme0", 00:06:13.098 "password": "test", 00:06:13.098 "method": "bdev_nvme_opal_revert", 00:06:13.098 "req_id": 1 00:06:13.098 } 00:06:13.098 Got JSON-RPC error response 00:06:13.098 response: 00:06:13.098 { 00:06:13.098 "code": -32603, 00:06:13.098 "message": "Internal error" 00:06:13.098 } 00:06:13.098 06:10:03 -- common/autotest_common.sh@1591 -- # true 00:06:13.098 06:10:03 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:06:13.098 06:10:03 -- common/autotest_common.sh@1595 -- # killprocess 936305 00:06:13.098 06:10:03 -- common/autotest_common.sh@954 -- # '[' -z 936305 ']' 00:06:13.098 06:10:03 -- common/autotest_common.sh@958 -- # kill -0 936305 00:06:13.098 06:10:03 -- common/autotest_common.sh@959 -- # uname 00:06:13.098 06:10:03 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.098 06:10:03 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 936305 00:06:13.098 06:10:03 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.098 06:10:03 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.098 06:10:03 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 936305' 00:06:13.098 killing process with pid 936305 00:06:13.098 06:10:03 -- common/autotest_common.sh@973 -- # kill 936305 00:06:13.098 06:10:03 -- common/autotest_common.sh@978 -- # wait 936305 00:06:15.006 06:10:04 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:15.006 06:10:04 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:15.006 06:10:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:15.006 06:10:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:15.007 06:10:04 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:15.007 06:10:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.007 06:10:04 -- common/autotest_common.sh@10 -- # set +x 00:06:15.007 06:10:04 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:15.007 06:10:04 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
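The failed OPAL exchange a few entries above is the product of two rpc.py calls from the cleanup helper; as standalone commands (workspace prefix trimmed) they would be:

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0
    scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test   # rejected here: -32603, Revert TPer failure 18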
00:06:15.007 06:10:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.007 06:10:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.007 06:10:04 -- common/autotest_common.sh@10 -- # set +x 00:06:15.007 ************************************ 00:06:15.007 START TEST env 00:06:15.007 ************************************ 00:06:15.007 06:10:04 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:15.007 * Looking for test storage... 00:06:15.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:15.007 06:10:05 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:15.007 06:10:05 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:15.007 06:10:05 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:15.007 06:10:05 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:15.007 06:10:05 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.007 06:10:05 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.007 06:10:05 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.007 06:10:05 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.007 06:10:05 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.007 06:10:05 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.007 06:10:05 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.007 06:10:05 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.007 06:10:05 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.007 06:10:05 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.007 06:10:05 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.007 06:10:05 env -- scripts/common.sh@344 -- # case "$op" in 00:06:15.007 06:10:05 env -- scripts/common.sh@345 -- # : 1 00:06:15.007 06:10:05 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.007 06:10:05 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.007 06:10:05 env -- scripts/common.sh@365 -- # decimal 1 00:06:15.007 06:10:05 env -- scripts/common.sh@353 -- # local d=1 00:06:15.007 06:10:05 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.007 06:10:05 env -- scripts/common.sh@355 -- # echo 1 00:06:15.007 06:10:05 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.007 06:10:05 env -- scripts/common.sh@366 -- # decimal 2 00:06:15.007 06:10:05 env -- scripts/common.sh@353 -- # local d=2 00:06:15.007 06:10:05 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.007 06:10:05 env -- scripts/common.sh@355 -- # echo 2 00:06:15.007 06:10:05 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.007 06:10:05 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.007 06:10:05 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.007 06:10:05 env -- scripts/common.sh@368 -- # return 0 00:06:15.007 06:10:05 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.007 06:10:05 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:15.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.007 --rc genhtml_branch_coverage=1 00:06:15.007 --rc genhtml_function_coverage=1 00:06:15.007 --rc genhtml_legend=1 00:06:15.007 --rc geninfo_all_blocks=1 00:06:15.007 --rc geninfo_unexecuted_blocks=1 00:06:15.007 00:06:15.007 ' 00:06:15.007 06:10:05 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:15.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.007 --rc genhtml_branch_coverage=1 00:06:15.007 --rc genhtml_function_coverage=1 00:06:15.007 --rc genhtml_legend=1 00:06:15.007 --rc geninfo_all_blocks=1 00:06:15.007 --rc geninfo_unexecuted_blocks=1 00:06:15.007 00:06:15.007 ' 00:06:15.007 06:10:05 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:15.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.007 --rc genhtml_branch_coverage=1 00:06:15.007 --rc genhtml_function_coverage=1 00:06:15.007 --rc genhtml_legend=1 00:06:15.007 --rc geninfo_all_blocks=1 00:06:15.007 --rc geninfo_unexecuted_blocks=1 00:06:15.007 00:06:15.007 ' 00:06:15.007 06:10:05 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:15.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.007 --rc genhtml_branch_coverage=1 00:06:15.007 --rc genhtml_function_coverage=1 00:06:15.007 --rc genhtml_legend=1 00:06:15.007 --rc geninfo_all_blocks=1 00:06:15.007 --rc geninfo_unexecuted_blocks=1 00:06:15.007 00:06:15.007 ' 00:06:15.007 06:10:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:15.007 06:10:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.007 06:10:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.007 06:10:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:15.266 ************************************ 00:06:15.266 START TEST env_memory 00:06:15.266 ************************************ 00:06:15.266 06:10:05 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:15.266 00:06:15.266 00:06:15.266 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.266 http://cunit.sourceforge.net/ 00:06:15.266 00:06:15.266 00:06:15.266 Suite: memory 00:06:15.266 Test: alloc and free memory map ...[2024-12-08 06:10:05.183474] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:15.266 passed 00:06:15.266 Test: mem map translation ...[2024-12-08 06:10:05.203529] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:15.266 [2024-12-08 06:10:05.203549] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:15.266 [2024-12-08 06:10:05.203604] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:15.266 [2024-12-08 06:10:05.203615] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:15.266 passed 00:06:15.266 Test: mem map registration ...[2024-12-08 06:10:05.244467] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:15.266 [2024-12-08 06:10:05.244485] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:15.266 passed 00:06:15.266 Test: mem map adjacent registrations ...passed 00:06:15.266 00:06:15.266 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.266 suites 1 1 n/a 0 0 00:06:15.266 tests 4 4 4 0 0 00:06:15.266 asserts 152 152 152 0 n/a 00:06:15.266 00:06:15.266 Elapsed time = 0.144 seconds 00:06:15.266 00:06:15.266 real 0m0.152s 00:06:15.266 user 0m0.145s 00:06:15.266 sys 0m0.007s 00:06:15.266 06:10:05 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.266 06:10:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:15.266 ************************************ 00:06:15.266 END TEST env_memory 00:06:15.266 ************************************ 00:06:15.266 06:10:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:15.266 06:10:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.266 06:10:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.266 06:10:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:15.266 ************************************ 00:06:15.266 START TEST env_vtophys 00:06:15.266 ************************************ 00:06:15.266 06:10:05 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:15.266 EAL: lib.eal log level changed from notice to debug 00:06:15.266 EAL: Detected lcore 0 as core 0 on socket 0 00:06:15.266 EAL: Detected lcore 1 as core 1 on socket 0 00:06:15.266 EAL: Detected lcore 2 as core 2 on socket 0 00:06:15.266 EAL: Detected lcore 3 as core 3 on socket 0 00:06:15.266 EAL: Detected lcore 4 as core 4 on socket 0 00:06:15.266 EAL: Detected lcore 5 as core 5 on socket 0 00:06:15.266 EAL: Detected lcore 6 as core 8 on socket 0 00:06:15.266 EAL: Detected lcore 7 as core 9 on socket 0 00:06:15.266 EAL: Detected lcore 8 as core 10 on socket 0 00:06:15.266 EAL: Detected lcore 9 as core 11 on socket 0 00:06:15.266 EAL: Detected lcore 10 
as core 12 on socket 0 00:06:15.266 EAL: Detected lcore 11 as core 13 on socket 0 00:06:15.266 EAL: Detected lcore 12 as core 0 on socket 1 00:06:15.266 EAL: Detected lcore 13 as core 1 on socket 1 00:06:15.266 EAL: Detected lcore 14 as core 2 on socket 1 00:06:15.266 EAL: Detected lcore 15 as core 3 on socket 1 00:06:15.266 EAL: Detected lcore 16 as core 4 on socket 1 00:06:15.266 EAL: Detected lcore 17 as core 5 on socket 1 00:06:15.266 EAL: Detected lcore 18 as core 8 on socket 1 00:06:15.266 EAL: Detected lcore 19 as core 9 on socket 1 00:06:15.266 EAL: Detected lcore 20 as core 10 on socket 1 00:06:15.266 EAL: Detected lcore 21 as core 11 on socket 1 00:06:15.266 EAL: Detected lcore 22 as core 12 on socket 1 00:06:15.266 EAL: Detected lcore 23 as core 13 on socket 1 00:06:15.266 EAL: Detected lcore 24 as core 0 on socket 0 00:06:15.266 EAL: Detected lcore 25 as core 1 on socket 0 00:06:15.266 EAL: Detected lcore 26 as core 2 on socket 0 00:06:15.266 EAL: Detected lcore 27 as core 3 on socket 0 00:06:15.266 EAL: Detected lcore 28 as core 4 on socket 0 00:06:15.266 EAL: Detected lcore 29 as core 5 on socket 0 00:06:15.266 EAL: Detected lcore 30 as core 8 on socket 0 00:06:15.266 EAL: Detected lcore 31 as core 9 on socket 0 00:06:15.266 EAL: Detected lcore 32 as core 10 on socket 0 00:06:15.266 EAL: Detected lcore 33 as core 11 on socket 0 00:06:15.266 EAL: Detected lcore 34 as core 12 on socket 0 00:06:15.266 EAL: Detected lcore 35 as core 13 on socket 0 00:06:15.266 EAL: Detected lcore 36 as core 0 on socket 1 00:06:15.266 EAL: Detected lcore 37 as core 1 on socket 1 00:06:15.266 EAL: Detected lcore 38 as core 2 on socket 1 00:06:15.266 EAL: Detected lcore 39 as core 3 on socket 1 00:06:15.266 EAL: Detected lcore 40 as core 4 on socket 1 00:06:15.266 EAL: Detected lcore 41 as core 5 on socket 1 00:06:15.266 EAL: Detected lcore 42 as core 8 on socket 1 00:06:15.266 EAL: Detected lcore 43 as core 9 on socket 1 00:06:15.266 EAL: Detected lcore 44 as core 10 on socket 1 00:06:15.266 EAL: Detected lcore 45 as core 11 on socket 1 00:06:15.266 EAL: Detected lcore 46 as core 12 on socket 1 00:06:15.266 EAL: Detected lcore 47 as core 13 on socket 1 00:06:15.266 EAL: Maximum logical cores by configuration: 128 00:06:15.266 EAL: Detected CPU lcores: 48 00:06:15.266 EAL: Detected NUMA nodes: 2 00:06:15.266 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:15.266 EAL: Detected shared linkage of DPDK 00:06:15.266 EAL: No shared files mode enabled, IPC will be disabled 00:06:15.526 EAL: Bus pci wants IOVA as 'DC' 00:06:15.526 EAL: Buses did not request a specific IOVA mode. 00:06:15.526 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:15.526 EAL: Selected IOVA mode 'VA' 00:06:15.526 EAL: Probing VFIO support... 00:06:15.526 EAL: IOMMU type 1 (Type 1) is supported 00:06:15.526 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:15.526 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:15.526 EAL: VFIO support initialized 00:06:15.526 EAL: Ask a virtual area of 0x2e000 bytes 00:06:15.526 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:15.526 EAL: Setting up physically contiguous memory... 
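EAL's probe above settles on IOVA-as-VA with a type-1 IOMMU through VFIO. Whether a host can do the same is visible from sysfs before any DPDK code runs; a quick check using standard kernel paths (nothing SPDK-specific):

    # a non-empty listing means the IOMMU is enabled and VFIO is usable
    ls /sys/kernel/iommu_groups
    # vfio and vfio_iommu_type1 back the "IOMMU type 1 (Type 1) is supported" line above
    lsmod | grep vfio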
00:06:15.526 EAL: Setting maximum number of open files to 524288 00:06:15.526 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:15.526 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:15.526 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:15.526 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.526 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:15.526 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:15.526 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.526 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:15.526 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:15.526 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.526 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:15.526 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:15.526 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.526 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:15.526 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:15.526 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.526 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:15.526 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:15.526 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.526 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:15.526 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:15.526 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.526 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:15.526 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:15.526 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.526 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:15.526 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:15.526 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:15.526 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.526 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:15.526 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:15.526 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.526 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:15.527 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:15.527 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.527 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:15.527 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:15.527 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.527 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:15.527 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:15.527 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.527 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:15.527 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:15.527 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.527 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:15.527 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:15.527 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.527 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:15.527 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:15.527 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.527 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:15.527 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:15.527 EAL: Hugepages will be freed exactly as allocated. 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: TSC frequency is ~2700000 KHz 00:06:15.527 EAL: Main lcore 0 is ready (tid=7eff56fa5a00;cpuset=[0]) 00:06:15.527 EAL: Trying to obtain current memory policy. 00:06:15.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.527 EAL: Restoring previous memory policy: 0 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was expanded by 2MB 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:15.527 EAL: Mem event callback 'spdk:(nil)' registered 00:06:15.527 00:06:15.527 00:06:15.527 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.527 http://cunit.sourceforge.net/ 00:06:15.527 00:06:15.527 00:06:15.527 Suite: components_suite 00:06:15.527 Test: vtophys_malloc_test ...passed 00:06:15.527 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:15.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.527 EAL: Restoring previous memory policy: 4 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was expanded by 4MB 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was shrunk by 4MB 00:06:15.527 EAL: Trying to obtain current memory policy. 00:06:15.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.527 EAL: Restoring previous memory policy: 4 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was expanded by 6MB 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was shrunk by 6MB 00:06:15.527 EAL: Trying to obtain current memory policy. 00:06:15.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.527 EAL: Restoring previous memory policy: 4 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was expanded by 10MB 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was shrunk by 10MB 00:06:15.527 EAL: Trying to obtain current memory policy. 
00:06:15.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.527 EAL: Restoring previous memory policy: 4 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was expanded by 18MB 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was shrunk by 18MB 00:06:15.527 EAL: Trying to obtain current memory policy. 00:06:15.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.527 EAL: Restoring previous memory policy: 4 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was expanded by 34MB 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was shrunk by 34MB 00:06:15.527 EAL: Trying to obtain current memory policy. 00:06:15.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.527 EAL: Restoring previous memory policy: 4 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was expanded by 66MB 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was shrunk by 66MB 00:06:15.527 EAL: Trying to obtain current memory policy. 00:06:15.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.527 EAL: Restoring previous memory policy: 4 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was expanded by 130MB 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was shrunk by 130MB 00:06:15.527 EAL: Trying to obtain current memory policy. 00:06:15.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.527 EAL: Restoring previous memory policy: 4 00:06:15.527 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.527 EAL: request: mp_malloc_sync 00:06:15.527 EAL: No shared files mode enabled, IPC is disabled 00:06:15.527 EAL: Heap on socket 0 was expanded by 258MB 00:06:15.786 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.786 EAL: request: mp_malloc_sync 00:06:15.786 EAL: No shared files mode enabled, IPC is disabled 00:06:15.786 EAL: Heap on socket 0 was shrunk by 258MB 00:06:15.786 EAL: Trying to obtain current memory policy. 
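The expand/shrink sizes in vtophys_spdk_malloc_test are not arbitrary: each step works out to 2^n MB plus the initial 2 MB, which is why the log walks 4, 6, 10, 18, 34, 66, 130 and 258 MB above and continues with 514 and 1026 MB below. The sequence reproduces with:

    # 2^n + 2 MB for n = 1..10 -> 4 6 10 18 34 66 130 258 514 1026
    for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo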
00:06:15.786 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.786 EAL: Restoring previous memory policy: 4 00:06:15.786 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.786 EAL: request: mp_malloc_sync 00:06:15.786 EAL: No shared files mode enabled, IPC is disabled 00:06:15.786 EAL: Heap on socket 0 was expanded by 514MB 00:06:16.046 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.046 EAL: request: mp_malloc_sync 00:06:16.046 EAL: No shared files mode enabled, IPC is disabled 00:06:16.046 EAL: Heap on socket 0 was shrunk by 514MB 00:06:16.046 EAL: Trying to obtain current memory policy. 00:06:16.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.305 EAL: Restoring previous memory policy: 4 00:06:16.305 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.305 EAL: request: mp_malloc_sync 00:06:16.305 EAL: No shared files mode enabled, IPC is disabled 00:06:16.305 EAL: Heap on socket 0 was expanded by 1026MB 00:06:16.582 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.841 EAL: request: mp_malloc_sync 00:06:16.841 EAL: No shared files mode enabled, IPC is disabled 00:06:16.841 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:16.841 passed 00:06:16.841 00:06:16.841 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.841 suites 1 1 n/a 0 0 00:06:16.841 tests 2 2 2 0 0 00:06:16.841 asserts 497 497 497 0 n/a 00:06:16.841 00:06:16.841 Elapsed time = 1.315 seconds 00:06:16.841 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.841 EAL: request: mp_malloc_sync 00:06:16.841 EAL: No shared files mode enabled, IPC is disabled 00:06:16.841 EAL: Heap on socket 0 was shrunk by 2MB 00:06:16.841 EAL: No shared files mode enabled, IPC is disabled 00:06:16.841 EAL: No shared files mode enabled, IPC is disabled 00:06:16.841 EAL: No shared files mode enabled, IPC is disabled 00:06:16.841 00:06:16.841 real 0m1.438s 00:06:16.841 user 0m0.843s 00:06:16.841 sys 0m0.559s 00:06:16.841 06:10:06 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.841 06:10:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:16.841 ************************************ 00:06:16.841 END TEST env_vtophys 00:06:16.841 ************************************ 00:06:16.841 06:10:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:16.841 06:10:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.841 06:10:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.842 06:10:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.842 ************************************ 00:06:16.842 START TEST env_pci 00:06:16.842 ************************************ 00:06:16.842 06:10:06 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:16.842 00:06:16.842 00:06:16.842 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.842 http://cunit.sourceforge.net/ 00:06:16.842 00:06:16.842 00:06:16.842 Suite: pci 00:06:16.842 Test: pci_hook ...[2024-12-08 06:10:06.850796] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 937204 has claimed it 00:06:16.842 EAL: Cannot find device (10000:00:01.0) 00:06:16.842 EAL: Failed to attach device on primary process 00:06:16.842 passed 00:06:16.842 00:06:16.842 Run Summary: Type Total Ran Passed Failed Inactive 
00:06:16.842 suites 1 1 n/a 0 0 00:06:16.842 tests 1 1 1 0 0 00:06:16.842 asserts 25 25 25 0 n/a 00:06:16.842 00:06:16.842 Elapsed time = 0.022 seconds 00:06:16.842 00:06:16.842 real 0m0.034s 00:06:16.842 user 0m0.012s 00:06:16.842 sys 0m0.021s 00:06:16.842 06:10:06 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.842 06:10:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:16.842 ************************************ 00:06:16.842 END TEST env_pci 00:06:16.842 ************************************ 00:06:16.842 06:10:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:16.842 06:10:06 env -- env/env.sh@15 -- # uname 00:06:16.842 06:10:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:16.842 06:10:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:16.842 06:10:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:16.842 06:10:06 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:16.842 06:10:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.842 06:10:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.842 ************************************ 00:06:16.842 START TEST env_dpdk_post_init 00:06:16.842 ************************************ 00:06:16.842 06:10:06 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:16.842 EAL: Detected CPU lcores: 48 00:06:16.842 EAL: Detected NUMA nodes: 2 00:06:16.842 EAL: Detected shared linkage of DPDK 00:06:16.842 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:17.102 EAL: Selected IOVA mode 'VA' 00:06:17.102 EAL: VFIO support initialized 00:06:17.102 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:17.102 EAL: Using IOMMU type 1 (Type 1) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:17.102 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:17.362 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:17.936 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 
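The probe sequence above shows env_dpdk_post_init claiming the ioat channels on both sockets and then the NVMe controller at 0000:82:00.0; its teardown output follows below. A minimal hedged sketch of reproducing this run outside the harness, assuming a built SPDK tree at the workspace path used throughout this log and root privileges for device binding:

  # Sketch only: paths and flags mirror the harness invocation above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Bind the NVMe/ioat devices to vfio-pci so EAL can probe them
  # (standard SPDK helper, requires root).
  sudo "$SPDK_DIR/scripts/setup.sh"

  # Same arguments the harness passed: one core, and a fixed base virtual
  # address so multi-process mappings land at identical offsets.
  "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" \
      -c 0x1 --base-virtaddr=0x200000000000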
00:06:21.256 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:06:21.256 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:06:21.256 Starting DPDK initialization... 00:06:21.256 Starting SPDK post initialization... 00:06:21.256 SPDK NVMe probe 00:06:21.256 Attaching to 0000:82:00.0 00:06:21.256 Attached to 0000:82:00.0 00:06:21.256 Cleaning up... 00:06:21.256 00:06:21.256 real 0m4.427s 00:06:21.256 user 0m3.059s 00:06:21.256 sys 0m0.430s 00:06:21.256 06:10:11 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.256 06:10:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:21.256 ************************************ 00:06:21.256 END TEST env_dpdk_post_init 00:06:21.256 ************************************ 00:06:21.515 06:10:11 env -- env/env.sh@26 -- # uname 00:06:21.515 06:10:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:21.515 06:10:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:21.515 06:10:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.515 06:10:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.515 06:10:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:21.516 ************************************ 00:06:21.516 START TEST env_mem_callbacks 00:06:21.516 ************************************ 00:06:21.516 06:10:11 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:21.516 EAL: Detected CPU lcores: 48 00:06:21.516 EAL: Detected NUMA nodes: 2 00:06:21.516 EAL: Detected shared linkage of DPDK 00:06:21.516 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:21.516 EAL: Selected IOVA mode 'VA' 00:06:21.516 EAL: VFIO support initialized 00:06:21.516 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:21.516 00:06:21.516 00:06:21.516 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.516 http://cunit.sourceforge.net/ 00:06:21.516 00:06:21.516 00:06:21.516 Suite: memory 00:06:21.516 Test: test ... 
00:06:21.516 register 0x200000200000 2097152 00:06:21.516 malloc 3145728 00:06:21.516 register 0x200000400000 4194304 00:06:21.516 buf 0x200000500000 len 3145728 PASSED 00:06:21.516 malloc 64 00:06:21.516 buf 0x2000004fff40 len 64 PASSED 00:06:21.516 malloc 4194304 00:06:21.516 register 0x200000800000 6291456 00:06:21.516 buf 0x200000a00000 len 4194304 PASSED 00:06:21.516 free 0x200000500000 3145728 00:06:21.516 free 0x2000004fff40 64 00:06:21.516 unregister 0x200000400000 4194304 PASSED 00:06:21.516 free 0x200000a00000 4194304 00:06:21.516 unregister 0x200000800000 6291456 PASSED 00:06:21.516 malloc 8388608 00:06:21.516 register 0x200000400000 10485760 00:06:21.516 buf 0x200000600000 len 8388608 PASSED 00:06:21.516 free 0x200000600000 8388608 00:06:21.516 unregister 0x200000400000 10485760 PASSED 00:06:21.516 passed 00:06:21.516 00:06:21.516 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.516 suites 1 1 n/a 0 0 00:06:21.516 tests 1 1 1 0 0 00:06:21.516 asserts 15 15 15 0 n/a 00:06:21.516 00:06:21.516 Elapsed time = 0.005 seconds 00:06:21.516 00:06:21.516 real 0m0.050s 00:06:21.516 user 0m0.017s 00:06:21.516 sys 0m0.032s 00:06:21.516 06:10:11 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.516 06:10:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:21.516 ************************************ 00:06:21.516 END TEST env_mem_callbacks 00:06:21.516 ************************************ 00:06:21.516 00:06:21.516 real 0m6.498s 00:06:21.516 user 0m4.264s 00:06:21.516 sys 0m1.279s 00:06:21.516 06:10:11 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.516 06:10:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:21.516 ************************************ 00:06:21.516 END TEST env 00:06:21.516 ************************************ 00:06:21.516 06:10:11 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:21.516 06:10:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.516 06:10:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.516 06:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:21.516 ************************************ 00:06:21.516 START TEST rpc 00:06:21.516 ************************************ 00:06:21.516 06:10:11 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:21.516 * Looking for test storage... 
00:06:21.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:21.516 06:10:11 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.516 06:10:11 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.516 06:10:11 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:21.774 06:10:11 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:21.774 06:10:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.774 06:10:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.774 06:10:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.774 06:10:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.774 06:10:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.774 06:10:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.774 06:10:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.774 06:10:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.774 06:10:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.774 06:10:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.774 06:10:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.774 06:10:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:21.774 06:10:11 rpc -- scripts/common.sh@345 -- # : 1 00:06:21.774 06:10:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.774 06:10:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.774 06:10:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:21.774 06:10:11 rpc -- scripts/common.sh@353 -- # local d=1 00:06:21.774 06:10:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.774 06:10:11 rpc -- scripts/common.sh@355 -- # echo 1 00:06:21.774 06:10:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.774 06:10:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:21.774 06:10:11 rpc -- scripts/common.sh@353 -- # local d=2 00:06:21.774 06:10:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.774 06:10:11 rpc -- scripts/common.sh@355 -- # echo 2 00:06:21.774 06:10:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.774 06:10:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.774 06:10:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.774 06:10:11 rpc -- scripts/common.sh@368 -- # return 0 00:06:21.774 06:10:11 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.774 06:10:11 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.774 --rc genhtml_branch_coverage=1 00:06:21.774 --rc genhtml_function_coverage=1 00:06:21.774 --rc genhtml_legend=1 00:06:21.774 --rc geninfo_all_blocks=1 00:06:21.774 --rc geninfo_unexecuted_blocks=1 00:06:21.774 00:06:21.774 ' 00:06:21.774 06:10:11 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.774 --rc genhtml_branch_coverage=1 00:06:21.774 --rc genhtml_function_coverage=1 00:06:21.774 --rc genhtml_legend=1 00:06:21.774 --rc geninfo_all_blocks=1 00:06:21.774 --rc geninfo_unexecuted_blocks=1 00:06:21.774 00:06:21.774 ' 00:06:21.774 06:10:11 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:21.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.774 --rc genhtml_branch_coverage=1 00:06:21.774 --rc genhtml_function_coverage=1 
00:06:21.774 --rc genhtml_legend=1 00:06:21.774 --rc geninfo_all_blocks=1 00:06:21.774 --rc geninfo_unexecuted_blocks=1 00:06:21.774 00:06:21.774 ' 00:06:21.774 06:10:11 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.774 --rc genhtml_branch_coverage=1 00:06:21.774 --rc genhtml_function_coverage=1 00:06:21.774 --rc genhtml_legend=1 00:06:21.774 --rc geninfo_all_blocks=1 00:06:21.774 --rc geninfo_unexecuted_blocks=1 00:06:21.774 00:06:21.774 ' 00:06:21.774 06:10:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=937988 00:06:21.774 06:10:11 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:21.774 06:10:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.774 06:10:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 937988 00:06:21.774 06:10:11 rpc -- common/autotest_common.sh@835 -- # '[' -z 937988 ']' 00:06:21.774 06:10:11 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.774 06:10:11 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.774 06:10:11 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.774 06:10:11 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.774 06:10:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.774 [2024-12-08 06:10:11.717533] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:06:21.774 [2024-12-08 06:10:11.717615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937988 ] 00:06:21.774 [2024-12-08 06:10:11.782729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.774 [2024-12-08 06:10:11.837912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:21.774 [2024-12-08 06:10:11.837973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 937988' to capture a snapshot of events at runtime. 00:06:21.774 [2024-12-08 06:10:11.838000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:21.774 [2024-12-08 06:10:11.838012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:21.774 [2024-12-08 06:10:11.838021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid937988 for offline analysis/debug. 
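The spdk_tgt being launched here was started with '-e bdev' (visible in the rpc.sh@64 invocation above), which is why the trace_get_info output further down reports tpoint_group_mask 0x8 with the bdev group fully enabled. A hedged sketch of the two snapshot paths the NOTICE lines suggest, assuming the spdk_trace binary from the same build tree and the pid printed above:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Live snapshot from the running target's shared-memory trace file:
  "$SPDK_DIR/build/bin/spdk_trace" -s spdk_tgt -p 937988

  # Or preserve the raw file for offline decoding after the target exits:
  cp /dev/shm/spdk_tgt_trace.pid937988 /tmp/
  "$SPDK_DIR/build/bin/spdk_trace" -f /tmp/spdk_tgt_trace.pid937988

The rpc_integrity test that runs next drives this target purely over its JSON-RPC socket. Condensed into direct rpc.py calls (a sketch; the default /var/tmp/spdk.sock socket is assumed), the flow it exercises is:

  RPC="$SPDK_DIR/scripts/rpc.py"

  "$RPC" bdev_malloc_create 8 512        # 8 MiB bdev, 512 B blocks -> Malloc0
  "$RPC" bdev_passthru_create -b Malloc0 -p Passthru0
  "$RPC" bdev_get_bdevs | jq length      # expect 2: Malloc0 and Passthru0
  "$RPC" bdev_passthru_delete Passthru0
  "$RPC" bdev_malloc_delete Malloc0
  "$RPC" bdev_get_bdevs | jq length      # back to 0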
00:06:21.775 [2024-12-08 06:10:11.838632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.033 06:10:12 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.033 06:10:12 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:22.033 06:10:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:22.033 06:10:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:22.033 06:10:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:22.033 06:10:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:22.033 06:10:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.033 06:10:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.033 06:10:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.033 ************************************ 00:06:22.033 START TEST rpc_integrity 00:06:22.033 ************************************ 00:06:22.033 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:22.033 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:22.033 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.033 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.033 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.033 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:22.033 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:22.295 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:22.295 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:22.295 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.295 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.295 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.295 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:22.295 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:22.295 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.295 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.295 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.295 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:22.295 { 00:06:22.295 "name": "Malloc0", 00:06:22.295 "aliases": [ 00:06:22.295 "779f7e9a-35be-42ea-9c9a-b6ae8540f517" 00:06:22.295 ], 00:06:22.295 "product_name": "Malloc disk", 00:06:22.295 "block_size": 512, 00:06:22.295 "num_blocks": 16384, 00:06:22.295 "uuid": "779f7e9a-35be-42ea-9c9a-b6ae8540f517", 00:06:22.295 "assigned_rate_limits": { 00:06:22.295 "rw_ios_per_sec": 0, 00:06:22.295 "rw_mbytes_per_sec": 0, 00:06:22.295 "r_mbytes_per_sec": 0, 00:06:22.295 "w_mbytes_per_sec": 0 00:06:22.295 }, 
00:06:22.295 "claimed": false, 00:06:22.296 "zoned": false, 00:06:22.296 "supported_io_types": { 00:06:22.296 "read": true, 00:06:22.296 "write": true, 00:06:22.296 "unmap": true, 00:06:22.296 "flush": true, 00:06:22.296 "reset": true, 00:06:22.296 "nvme_admin": false, 00:06:22.296 "nvme_io": false, 00:06:22.296 "nvme_io_md": false, 00:06:22.296 "write_zeroes": true, 00:06:22.296 "zcopy": true, 00:06:22.296 "get_zone_info": false, 00:06:22.296 "zone_management": false, 00:06:22.296 "zone_append": false, 00:06:22.296 "compare": false, 00:06:22.296 "compare_and_write": false, 00:06:22.296 "abort": true, 00:06:22.296 "seek_hole": false, 00:06:22.296 "seek_data": false, 00:06:22.296 "copy": true, 00:06:22.296 "nvme_iov_md": false 00:06:22.296 }, 00:06:22.296 "memory_domains": [ 00:06:22.296 { 00:06:22.296 "dma_device_id": "system", 00:06:22.296 "dma_device_type": 1 00:06:22.296 }, 00:06:22.296 { 00:06:22.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.296 "dma_device_type": 2 00:06:22.296 } 00:06:22.296 ], 00:06:22.296 "driver_specific": {} 00:06:22.296 } 00:06:22.296 ]' 00:06:22.296 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:22.296 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:22.296 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.296 [2024-12-08 06:10:12.225048] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:22.296 [2024-12-08 06:10:12.225102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:22.296 [2024-12-08 06:10:12.225121] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x93a6a0 00:06:22.296 [2024-12-08 06:10:12.225133] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:22.296 [2024-12-08 06:10:12.226448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:22.296 [2024-12-08 06:10:12.226470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:22.296 Passthru0 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.296 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.296 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:22.296 { 00:06:22.296 "name": "Malloc0", 00:06:22.296 "aliases": [ 00:06:22.296 "779f7e9a-35be-42ea-9c9a-b6ae8540f517" 00:06:22.296 ], 00:06:22.296 "product_name": "Malloc disk", 00:06:22.296 "block_size": 512, 00:06:22.296 "num_blocks": 16384, 00:06:22.296 "uuid": "779f7e9a-35be-42ea-9c9a-b6ae8540f517", 00:06:22.296 "assigned_rate_limits": { 00:06:22.296 "rw_ios_per_sec": 0, 00:06:22.296 "rw_mbytes_per_sec": 0, 00:06:22.296 "r_mbytes_per_sec": 0, 00:06:22.296 "w_mbytes_per_sec": 0 00:06:22.296 }, 00:06:22.296 "claimed": true, 00:06:22.296 "claim_type": "exclusive_write", 00:06:22.296 "zoned": false, 00:06:22.296 "supported_io_types": { 00:06:22.296 "read": true, 00:06:22.296 "write": true, 00:06:22.296 "unmap": true, 00:06:22.296 "flush": 
true, 00:06:22.296 "reset": true, 00:06:22.296 "nvme_admin": false, 00:06:22.296 "nvme_io": false, 00:06:22.296 "nvme_io_md": false, 00:06:22.296 "write_zeroes": true, 00:06:22.296 "zcopy": true, 00:06:22.296 "get_zone_info": false, 00:06:22.296 "zone_management": false, 00:06:22.296 "zone_append": false, 00:06:22.296 "compare": false, 00:06:22.296 "compare_and_write": false, 00:06:22.296 "abort": true, 00:06:22.296 "seek_hole": false, 00:06:22.296 "seek_data": false, 00:06:22.296 "copy": true, 00:06:22.296 "nvme_iov_md": false 00:06:22.296 }, 00:06:22.296 "memory_domains": [ 00:06:22.296 { 00:06:22.296 "dma_device_id": "system", 00:06:22.296 "dma_device_type": 1 00:06:22.296 }, 00:06:22.296 { 00:06:22.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.296 "dma_device_type": 2 00:06:22.296 } 00:06:22.296 ], 00:06:22.296 "driver_specific": {} 00:06:22.296 }, 00:06:22.296 { 00:06:22.296 "name": "Passthru0", 00:06:22.296 "aliases": [ 00:06:22.296 "05657afc-140e-553d-91b7-87bbb291159e" 00:06:22.296 ], 00:06:22.296 "product_name": "passthru", 00:06:22.296 "block_size": 512, 00:06:22.296 "num_blocks": 16384, 00:06:22.296 "uuid": "05657afc-140e-553d-91b7-87bbb291159e", 00:06:22.296 "assigned_rate_limits": { 00:06:22.296 "rw_ios_per_sec": 0, 00:06:22.296 "rw_mbytes_per_sec": 0, 00:06:22.296 "r_mbytes_per_sec": 0, 00:06:22.296 "w_mbytes_per_sec": 0 00:06:22.296 }, 00:06:22.296 "claimed": false, 00:06:22.296 "zoned": false, 00:06:22.296 "supported_io_types": { 00:06:22.296 "read": true, 00:06:22.296 "write": true, 00:06:22.296 "unmap": true, 00:06:22.296 "flush": true, 00:06:22.296 "reset": true, 00:06:22.296 "nvme_admin": false, 00:06:22.296 "nvme_io": false, 00:06:22.296 "nvme_io_md": false, 00:06:22.296 "write_zeroes": true, 00:06:22.296 "zcopy": true, 00:06:22.296 "get_zone_info": false, 00:06:22.296 "zone_management": false, 00:06:22.296 "zone_append": false, 00:06:22.296 "compare": false, 00:06:22.296 "compare_and_write": false, 00:06:22.296 "abort": true, 00:06:22.296 "seek_hole": false, 00:06:22.296 "seek_data": false, 00:06:22.296 "copy": true, 00:06:22.296 "nvme_iov_md": false 00:06:22.296 }, 00:06:22.296 "memory_domains": [ 00:06:22.296 { 00:06:22.296 "dma_device_id": "system", 00:06:22.296 "dma_device_type": 1 00:06:22.296 }, 00:06:22.296 { 00:06:22.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.296 "dma_device_type": 2 00:06:22.296 } 00:06:22.296 ], 00:06:22.296 "driver_specific": { 00:06:22.296 "passthru": { 00:06:22.296 "name": "Passthru0", 00:06:22.296 "base_bdev_name": "Malloc0" 00:06:22.296 } 00:06:22.296 } 00:06:22.296 } 00:06:22.296 ]' 00:06:22.296 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:22.296 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:22.296 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.296 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.296 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.296 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:22.296 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:22.296 06:10:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:22.296 00:06:22.296 real 0m0.212s 00:06:22.296 user 0m0.135s 00:06:22.296 sys 0m0.021s 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.296 06:10:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.296 ************************************ 00:06:22.296 END TEST rpc_integrity 00:06:22.296 ************************************ 00:06:22.296 06:10:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:22.296 06:10:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.296 06:10:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.296 06:10:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.296 ************************************ 00:06:22.296 START TEST rpc_plugins 00:06:22.296 ************************************ 00:06:22.296 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:22.296 06:10:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:22.296 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.296 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.296 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.296 06:10:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:22.296 06:10:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:22.296 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.296 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.296 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.296 06:10:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:22.296 { 00:06:22.296 "name": "Malloc1", 00:06:22.296 "aliases": [ 00:06:22.297 "09e6b311-aa54-4e34-afc3-1bb91358cf6c" 00:06:22.297 ], 00:06:22.297 "product_name": "Malloc disk", 00:06:22.297 "block_size": 4096, 00:06:22.297 "num_blocks": 256, 00:06:22.297 "uuid": "09e6b311-aa54-4e34-afc3-1bb91358cf6c", 00:06:22.297 "assigned_rate_limits": { 00:06:22.297 "rw_ios_per_sec": 0, 00:06:22.297 "rw_mbytes_per_sec": 0, 00:06:22.297 "r_mbytes_per_sec": 0, 00:06:22.297 "w_mbytes_per_sec": 0 00:06:22.297 }, 00:06:22.297 "claimed": false, 00:06:22.297 "zoned": false, 00:06:22.297 "supported_io_types": { 00:06:22.297 "read": true, 00:06:22.297 "write": true, 00:06:22.297 "unmap": true, 00:06:22.297 "flush": true, 00:06:22.297 "reset": true, 00:06:22.297 "nvme_admin": false, 00:06:22.297 "nvme_io": false, 00:06:22.297 "nvme_io_md": false, 00:06:22.297 "write_zeroes": true, 00:06:22.297 "zcopy": true, 00:06:22.297 "get_zone_info": false, 00:06:22.297 "zone_management": false, 00:06:22.297 "zone_append": false, 00:06:22.297 "compare": false, 00:06:22.297 "compare_and_write": false, 00:06:22.297 "abort": true, 00:06:22.297 "seek_hole": false, 00:06:22.297 "seek_data": false, 00:06:22.297 "copy": true, 00:06:22.297 "nvme_iov_md": false 
00:06:22.297 }, 00:06:22.297 "memory_domains": [ 00:06:22.297 { 00:06:22.297 "dma_device_id": "system", 00:06:22.297 "dma_device_type": 1 00:06:22.297 }, 00:06:22.297 { 00:06:22.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.297 "dma_device_type": 2 00:06:22.297 } 00:06:22.297 ], 00:06:22.297 "driver_specific": {} 00:06:22.297 } 00:06:22.297 ]' 00:06:22.297 06:10:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:22.557 06:10:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:22.557 06:10:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:22.557 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.557 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.557 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.557 06:10:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:22.557 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.557 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.557 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.557 06:10:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:22.557 06:10:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:22.557 06:10:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:22.557 00:06:22.557 real 0m0.108s 00:06:22.557 user 0m0.071s 00:06:22.557 sys 0m0.007s 00:06:22.557 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.557 06:10:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.557 ************************************ 00:06:22.557 END TEST rpc_plugins 00:06:22.557 ************************************ 00:06:22.557 06:10:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:22.557 06:10:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.557 06:10:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.557 06:10:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.557 ************************************ 00:06:22.557 START TEST rpc_trace_cmd_test 00:06:22.557 ************************************ 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:22.557 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid937988", 00:06:22.557 "tpoint_group_mask": "0x8", 00:06:22.557 "iscsi_conn": { 00:06:22.557 "mask": "0x2", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "scsi": { 00:06:22.557 "mask": "0x4", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "bdev": { 00:06:22.557 "mask": "0x8", 00:06:22.557 "tpoint_mask": "0xffffffffffffffff" 00:06:22.557 }, 00:06:22.557 "nvmf_rdma": { 00:06:22.557 "mask": "0x10", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "nvmf_tcp": { 00:06:22.557 "mask": "0x20", 00:06:22.557 
"tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "ftl": { 00:06:22.557 "mask": "0x40", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "blobfs": { 00:06:22.557 "mask": "0x80", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "dsa": { 00:06:22.557 "mask": "0x200", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "thread": { 00:06:22.557 "mask": "0x400", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "nvme_pcie": { 00:06:22.557 "mask": "0x800", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "iaa": { 00:06:22.557 "mask": "0x1000", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "nvme_tcp": { 00:06:22.557 "mask": "0x2000", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "bdev_nvme": { 00:06:22.557 "mask": "0x4000", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "sock": { 00:06:22.557 "mask": "0x8000", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "blob": { 00:06:22.557 "mask": "0x10000", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "bdev_raid": { 00:06:22.557 "mask": "0x20000", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 }, 00:06:22.557 "scheduler": { 00:06:22.557 "mask": "0x40000", 00:06:22.557 "tpoint_mask": "0x0" 00:06:22.557 } 00:06:22.557 }' 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:22.557 06:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:22.818 06:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:22.818 06:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:22.818 06:10:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:22.818 00:06:22.818 real 0m0.181s 00:06:22.818 user 0m0.156s 00:06:22.818 sys 0m0.017s 00:06:22.818 06:10:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.818 06:10:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.818 ************************************ 00:06:22.818 END TEST rpc_trace_cmd_test 00:06:22.818 ************************************ 00:06:22.818 06:10:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:22.818 06:10:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:22.818 06:10:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:22.818 06:10:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.818 06:10:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.818 06:10:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.818 ************************************ 00:06:22.818 START TEST rpc_daemon_integrity 00:06:22.818 ************************************ 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.818 06:10:12 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.818 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:22.818 { 00:06:22.818 "name": "Malloc2", 00:06:22.818 "aliases": [ 00:06:22.818 "37920167-97ea-4766-af22-01982ee8af5c" 00:06:22.818 ], 00:06:22.818 "product_name": "Malloc disk", 00:06:22.818 "block_size": 512, 00:06:22.818 "num_blocks": 16384, 00:06:22.818 "uuid": "37920167-97ea-4766-af22-01982ee8af5c", 00:06:22.818 "assigned_rate_limits": { 00:06:22.818 "rw_ios_per_sec": 0, 00:06:22.818 "rw_mbytes_per_sec": 0, 00:06:22.818 "r_mbytes_per_sec": 0, 00:06:22.818 "w_mbytes_per_sec": 0 00:06:22.818 }, 00:06:22.818 "claimed": false, 00:06:22.818 "zoned": false, 00:06:22.818 "supported_io_types": { 00:06:22.818 "read": true, 00:06:22.818 "write": true, 00:06:22.818 "unmap": true, 00:06:22.818 "flush": true, 00:06:22.818 "reset": true, 00:06:22.818 "nvme_admin": false, 00:06:22.818 "nvme_io": false, 00:06:22.818 "nvme_io_md": false, 00:06:22.818 "write_zeroes": true, 00:06:22.818 "zcopy": true, 00:06:22.818 "get_zone_info": false, 00:06:22.818 "zone_management": false, 00:06:22.818 "zone_append": false, 00:06:22.818 "compare": false, 00:06:22.818 "compare_and_write": false, 00:06:22.818 "abort": true, 00:06:22.818 "seek_hole": false, 00:06:22.818 "seek_data": false, 00:06:22.818 "copy": true, 00:06:22.818 "nvme_iov_md": false 00:06:22.818 }, 00:06:22.818 "memory_domains": [ 00:06:22.818 { 00:06:22.818 "dma_device_id": "system", 00:06:22.819 "dma_device_type": 1 00:06:22.819 }, 00:06:22.819 { 00:06:22.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.819 "dma_device_type": 2 00:06:22.819 } 00:06:22.819 ], 00:06:22.819 "driver_specific": {} 00:06:22.819 } 00:06:22.819 ]' 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.819 [2024-12-08 06:10:12.859170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:22.819 
[2024-12-08 06:10:12.859209] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:22.819 [2024-12-08 06:10:12.859243] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7f7cb0 00:06:22.819 [2024-12-08 06:10:12.859263] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:22.819 [2024-12-08 06:10:12.860438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:22.819 [2024-12-08 06:10:12.860461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:22.819 Passthru0 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:22.819 { 00:06:22.819 "name": "Malloc2", 00:06:22.819 "aliases": [ 00:06:22.819 "37920167-97ea-4766-af22-01982ee8af5c" 00:06:22.819 ], 00:06:22.819 "product_name": "Malloc disk", 00:06:22.819 "block_size": 512, 00:06:22.819 "num_blocks": 16384, 00:06:22.819 "uuid": "37920167-97ea-4766-af22-01982ee8af5c", 00:06:22.819 "assigned_rate_limits": { 00:06:22.819 "rw_ios_per_sec": 0, 00:06:22.819 "rw_mbytes_per_sec": 0, 00:06:22.819 "r_mbytes_per_sec": 0, 00:06:22.819 "w_mbytes_per_sec": 0 00:06:22.819 }, 00:06:22.819 "claimed": true, 00:06:22.819 "claim_type": "exclusive_write", 00:06:22.819 "zoned": false, 00:06:22.819 "supported_io_types": { 00:06:22.819 "read": true, 00:06:22.819 "write": true, 00:06:22.819 "unmap": true, 00:06:22.819 "flush": true, 00:06:22.819 "reset": true, 00:06:22.819 "nvme_admin": false, 00:06:22.819 "nvme_io": false, 00:06:22.819 "nvme_io_md": false, 00:06:22.819 "write_zeroes": true, 00:06:22.819 "zcopy": true, 00:06:22.819 "get_zone_info": false, 00:06:22.819 "zone_management": false, 00:06:22.819 "zone_append": false, 00:06:22.819 "compare": false, 00:06:22.819 "compare_and_write": false, 00:06:22.819 "abort": true, 00:06:22.819 "seek_hole": false, 00:06:22.819 "seek_data": false, 00:06:22.819 "copy": true, 00:06:22.819 "nvme_iov_md": false 00:06:22.819 }, 00:06:22.819 "memory_domains": [ 00:06:22.819 { 00:06:22.819 "dma_device_id": "system", 00:06:22.819 "dma_device_type": 1 00:06:22.819 }, 00:06:22.819 { 00:06:22.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.819 "dma_device_type": 2 00:06:22.819 } 00:06:22.819 ], 00:06:22.819 "driver_specific": {} 00:06:22.819 }, 00:06:22.819 { 00:06:22.819 "name": "Passthru0", 00:06:22.819 "aliases": [ 00:06:22.819 "99f98251-4e9f-5065-92ba-665b33fbcc6f" 00:06:22.819 ], 00:06:22.819 "product_name": "passthru", 00:06:22.819 "block_size": 512, 00:06:22.819 "num_blocks": 16384, 00:06:22.819 "uuid": "99f98251-4e9f-5065-92ba-665b33fbcc6f", 00:06:22.819 "assigned_rate_limits": { 00:06:22.819 "rw_ios_per_sec": 0, 00:06:22.819 "rw_mbytes_per_sec": 0, 00:06:22.819 "r_mbytes_per_sec": 0, 00:06:22.819 "w_mbytes_per_sec": 0 00:06:22.819 }, 00:06:22.819 "claimed": false, 00:06:22.819 "zoned": false, 00:06:22.819 "supported_io_types": { 00:06:22.819 "read": true, 00:06:22.819 "write": true, 00:06:22.819 "unmap": true, 00:06:22.819 "flush": true, 00:06:22.819 "reset": true, 
00:06:22.819 "nvme_admin": false, 00:06:22.819 "nvme_io": false, 00:06:22.819 "nvme_io_md": false, 00:06:22.819 "write_zeroes": true, 00:06:22.819 "zcopy": true, 00:06:22.819 "get_zone_info": false, 00:06:22.819 "zone_management": false, 00:06:22.819 "zone_append": false, 00:06:22.819 "compare": false, 00:06:22.819 "compare_and_write": false, 00:06:22.819 "abort": true, 00:06:22.819 "seek_hole": false, 00:06:22.819 "seek_data": false, 00:06:22.819 "copy": true, 00:06:22.819 "nvme_iov_md": false 00:06:22.819 }, 00:06:22.819 "memory_domains": [ 00:06:22.819 { 00:06:22.819 "dma_device_id": "system", 00:06:22.819 "dma_device_type": 1 00:06:22.819 }, 00:06:22.819 { 00:06:22.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.819 "dma_device_type": 2 00:06:22.819 } 00:06:22.819 ], 00:06:22.819 "driver_specific": { 00:06:22.819 "passthru": { 00:06:22.819 "name": "Passthru0", 00:06:22.819 "base_bdev_name": "Malloc2" 00:06:22.819 } 00:06:22.819 } 00:06:22.819 } 00:06:22.819 ]' 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:22.819 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:23.080 06:10:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:23.080 00:06:23.080 real 0m0.208s 00:06:23.080 user 0m0.133s 00:06:23.080 sys 0m0.021s 00:06:23.080 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.080 06:10:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.080 ************************************ 00:06:23.080 END TEST rpc_daemon_integrity 00:06:23.080 ************************************ 00:06:23.080 06:10:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:23.080 06:10:12 rpc -- rpc/rpc.sh@84 -- # killprocess 937988 00:06:23.080 06:10:12 rpc -- common/autotest_common.sh@954 -- # '[' -z 937988 ']' 00:06:23.080 06:10:12 rpc -- common/autotest_common.sh@958 -- # kill -0 937988 00:06:23.080 06:10:12 rpc -- common/autotest_common.sh@959 -- # uname 00:06:23.080 06:10:12 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.080 06:10:12 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 937988 
00:06:23.080 06:10:13 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.080 06:10:13 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.080 06:10:13 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 937988' 00:06:23.080 killing process with pid 937988 00:06:23.080 06:10:13 rpc -- common/autotest_common.sh@973 -- # kill 937988 00:06:23.080 06:10:13 rpc -- common/autotest_common.sh@978 -- # wait 937988 00:06:23.340 00:06:23.340 real 0m1.915s 00:06:23.340 user 0m2.370s 00:06:23.340 sys 0m0.586s 00:06:23.340 06:10:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.340 06:10:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.340 ************************************ 00:06:23.340 END TEST rpc 00:06:23.340 ************************************ 00:06:23.598 06:10:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:23.598 06:10:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.598 06:10:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.598 06:10:13 -- common/autotest_common.sh@10 -- # set +x 00:06:23.598 ************************************ 00:06:23.598 START TEST skip_rpc 00:06:23.598 ************************************ 00:06:23.599 06:10:13 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:23.599 * Looking for test storage... 00:06:23.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:23.599 06:10:13 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.599 06:10:13 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.599 06:10:13 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.599 06:10:13 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.599 06:10:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:23.599 06:10:13 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.599 06:10:13 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.599 --rc genhtml_branch_coverage=1 00:06:23.599 --rc genhtml_function_coverage=1 00:06:23.599 --rc genhtml_legend=1 00:06:23.599 --rc geninfo_all_blocks=1 00:06:23.599 --rc geninfo_unexecuted_blocks=1 00:06:23.599 00:06:23.599 ' 00:06:23.599 06:10:13 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.599 --rc genhtml_branch_coverage=1 00:06:23.599 --rc genhtml_function_coverage=1 00:06:23.599 --rc genhtml_legend=1 00:06:23.599 --rc geninfo_all_blocks=1 00:06:23.599 --rc geninfo_unexecuted_blocks=1 00:06:23.599 00:06:23.599 ' 00:06:23.599 06:10:13 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.599 --rc genhtml_branch_coverage=1 00:06:23.599 --rc genhtml_function_coverage=1 00:06:23.599 --rc genhtml_legend=1 00:06:23.599 --rc geninfo_all_blocks=1 00:06:23.599 --rc geninfo_unexecuted_blocks=1 00:06:23.599 00:06:23.599 ' 00:06:23.599 06:10:13 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.599 --rc genhtml_branch_coverage=1 00:06:23.599 --rc genhtml_function_coverage=1 00:06:23.599 --rc genhtml_legend=1 00:06:23.599 --rc geninfo_all_blocks=1 00:06:23.599 --rc geninfo_unexecuted_blocks=1 00:06:23.599 00:06:23.599 ' 00:06:23.599 06:10:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:23.599 06:10:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:23.599 06:10:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:23.599 06:10:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.599 06:10:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.599 06:10:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.599 ************************************ 00:06:23.599 START TEST skip_rpc 00:06:23.599 ************************************ 00:06:23.599 06:10:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:23.599 
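The skip_rpc case that begins here boots spdk_tgt with --no-rpc-server, so the /var/tmp/spdk.sock listener is never created and the rpc_cmd spdk_get_version call below is expected to fail (the '[[ 1 == 0 ]]' check). A hedged standalone sketch of the same negative check:

  # Sketch: start a target without the RPC server, then expect a client failure.
  "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
  TGT_PID=$!
  sleep 5   # same settle time the test uses
  if "$SPDK_DIR/scripts/rpc.py" spdk_get_version; then
      echo "unexpected: RPC server answered"
  else
      echo "RPC refused, as the test expects"
  fi
  kill "$TGT_PID"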
06:10:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=938316 00:06:23.599 06:10:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:23.599 06:10:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.599 06:10:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:23.859 [2024-12-08 06:10:13.718991] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:06:23.859 [2024-12-08 06:10:13.719091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid938316 ] 00:06:23.859 [2024-12-08 06:10:13.785985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.859 [2024-12-08 06:10:13.843694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 938316 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 938316 ']' 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 938316 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 938316 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 938316' 00:06:29.130 killing process with pid 938316 00:06:29.130 06:10:18 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 938316 00:06:29.130 06:10:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 938316 00:06:29.130 00:06:29.130 real 0m5.457s 00:06:29.130 user 0m5.157s 00:06:29.130 sys 0m0.304s 00:06:29.130 06:10:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.130 06:10:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.130 ************************************ 00:06:29.130 END TEST skip_rpc 00:06:29.130 ************************************ 00:06:29.130 06:10:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:29.130 06:10:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.130 06:10:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.130 06:10:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.130 ************************************ 00:06:29.130 START TEST skip_rpc_with_json 00:06:29.130 ************************************ 00:06:29.130 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:29.130 06:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:29.130 06:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=939003 00:06:29.130 06:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.130 06:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.130 06:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 939003 00:06:29.130 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 939003 ']' 00:06:29.130 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.130 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.130 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.130 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.130 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.130 [2024-12-08 06:10:19.228216] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:06:29.130 [2024-12-08 06:10:19.228302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid939003 ] 00:06:29.391 [2024-12-08 06:10:19.295512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.391 [2024-12-08 06:10:19.354033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.652 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.652 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:29.652 06:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:29.652 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.652 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.652 [2024-12-08 06:10:19.623821] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:29.652 request: 00:06:29.652 { 00:06:29.652 "trtype": "tcp", 00:06:29.652 "method": "nvmf_get_transports", 00:06:29.652 "req_id": 1 00:06:29.652 } 00:06:29.652 Got JSON-RPC error response 00:06:29.652 response: 00:06:29.652 { 00:06:29.652 "code": -19, 00:06:29.652 "message": "No such device" 00:06:29.652 } 00:06:29.652 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:29.652 06:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:29.652 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.652 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.652 [2024-12-08 06:10:19.631928] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.652 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.652 06:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:29.652 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.652 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.913 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.913 06:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:29.913 { 00:06:29.913 "subsystems": [ 00:06:29.913 { 00:06:29.913 "subsystem": "fsdev", 00:06:29.913 "config": [ 00:06:29.913 { 00:06:29.913 "method": "fsdev_set_opts", 00:06:29.913 "params": { 00:06:29.913 "fsdev_io_pool_size": 65535, 00:06:29.913 "fsdev_io_cache_size": 256 00:06:29.913 } 00:06:29.913 } 00:06:29.913 ] 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "subsystem": "vfio_user_target", 00:06:29.913 "config": null 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "subsystem": "keyring", 00:06:29.913 "config": [] 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "subsystem": "iobuf", 00:06:29.913 "config": [ 00:06:29.913 { 00:06:29.913 "method": "iobuf_set_options", 00:06:29.913 "params": { 00:06:29.913 "small_pool_count": 8192, 00:06:29.913 "large_pool_count": 1024, 00:06:29.913 "small_bufsize": 8192, 00:06:29.913 "large_bufsize": 135168, 00:06:29.913 "enable_numa": false 00:06:29.913 } 00:06:29.913 } 00:06:29.913 
] 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "subsystem": "sock", 00:06:29.913 "config": [ 00:06:29.913 { 00:06:29.913 "method": "sock_set_default_impl", 00:06:29.913 "params": { 00:06:29.913 "impl_name": "posix" 00:06:29.913 } 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "method": "sock_impl_set_options", 00:06:29.913 "params": { 00:06:29.913 "impl_name": "ssl", 00:06:29.913 "recv_buf_size": 4096, 00:06:29.913 "send_buf_size": 4096, 00:06:29.913 "enable_recv_pipe": true, 00:06:29.913 "enable_quickack": false, 00:06:29.913 "enable_placement_id": 0, 00:06:29.913 "enable_zerocopy_send_server": true, 00:06:29.913 "enable_zerocopy_send_client": false, 00:06:29.913 "zerocopy_threshold": 0, 00:06:29.913 "tls_version": 0, 00:06:29.913 "enable_ktls": false 00:06:29.913 } 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "method": "sock_impl_set_options", 00:06:29.913 "params": { 00:06:29.913 "impl_name": "posix", 00:06:29.913 "recv_buf_size": 2097152, 00:06:29.913 "send_buf_size": 2097152, 00:06:29.913 "enable_recv_pipe": true, 00:06:29.913 "enable_quickack": false, 00:06:29.913 "enable_placement_id": 0, 00:06:29.913 "enable_zerocopy_send_server": true, 00:06:29.913 "enable_zerocopy_send_client": false, 00:06:29.913 "zerocopy_threshold": 0, 00:06:29.913 "tls_version": 0, 00:06:29.913 "enable_ktls": false 00:06:29.913 } 00:06:29.913 } 00:06:29.913 ] 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "subsystem": "vmd", 00:06:29.913 "config": [] 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "subsystem": "accel", 00:06:29.913 "config": [ 00:06:29.913 { 00:06:29.913 "method": "accel_set_options", 00:06:29.913 "params": { 00:06:29.913 "small_cache_size": 128, 00:06:29.913 "large_cache_size": 16, 00:06:29.913 "task_count": 2048, 00:06:29.913 "sequence_count": 2048, 00:06:29.913 "buf_count": 2048 00:06:29.913 } 00:06:29.913 } 00:06:29.913 ] 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "subsystem": "bdev", 00:06:29.913 "config": [ 00:06:29.913 { 00:06:29.913 "method": "bdev_set_options", 00:06:29.913 "params": { 00:06:29.913 "bdev_io_pool_size": 65535, 00:06:29.913 "bdev_io_cache_size": 256, 00:06:29.913 "bdev_auto_examine": true, 00:06:29.913 "iobuf_small_cache_size": 128, 00:06:29.913 "iobuf_large_cache_size": 16 00:06:29.913 } 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "method": "bdev_raid_set_options", 00:06:29.913 "params": { 00:06:29.913 "process_window_size_kb": 1024, 00:06:29.913 "process_max_bandwidth_mb_sec": 0 00:06:29.913 } 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "method": "bdev_iscsi_set_options", 00:06:29.913 "params": { 00:06:29.913 "timeout_sec": 30 00:06:29.913 } 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "method": "bdev_nvme_set_options", 00:06:29.913 "params": { 00:06:29.913 "action_on_timeout": "none", 00:06:29.913 "timeout_us": 0, 00:06:29.913 "timeout_admin_us": 0, 00:06:29.913 "keep_alive_timeout_ms": 10000, 00:06:29.913 "arbitration_burst": 0, 00:06:29.913 "low_priority_weight": 0, 00:06:29.913 "medium_priority_weight": 0, 00:06:29.913 "high_priority_weight": 0, 00:06:29.913 "nvme_adminq_poll_period_us": 10000, 00:06:29.913 "nvme_ioq_poll_period_us": 0, 00:06:29.913 "io_queue_requests": 0, 00:06:29.913 "delay_cmd_submit": true, 00:06:29.913 "transport_retry_count": 4, 00:06:29.913 "bdev_retry_count": 3, 00:06:29.913 "transport_ack_timeout": 0, 00:06:29.913 "ctrlr_loss_timeout_sec": 0, 00:06:29.913 "reconnect_delay_sec": 0, 00:06:29.913 "fast_io_fail_timeout_sec": 0, 00:06:29.913 "disable_auto_failback": false, 00:06:29.913 "generate_uuids": false, 00:06:29.913 "transport_tos": 0, 
00:06:29.913 "nvme_error_stat": false, 00:06:29.913 "rdma_srq_size": 0, 00:06:29.913 "io_path_stat": false, 00:06:29.913 "allow_accel_sequence": false, 00:06:29.913 "rdma_max_cq_size": 0, 00:06:29.913 "rdma_cm_event_timeout_ms": 0, 00:06:29.913 "dhchap_digests": [ 00:06:29.913 "sha256", 00:06:29.913 "sha384", 00:06:29.913 "sha512" 00:06:29.913 ], 00:06:29.913 "dhchap_dhgroups": [ 00:06:29.913 "null", 00:06:29.913 "ffdhe2048", 00:06:29.913 "ffdhe3072", 00:06:29.913 "ffdhe4096", 00:06:29.913 "ffdhe6144", 00:06:29.913 "ffdhe8192" 00:06:29.913 ] 00:06:29.913 } 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "method": "bdev_nvme_set_hotplug", 00:06:29.913 "params": { 00:06:29.913 "period_us": 100000, 00:06:29.913 "enable": false 00:06:29.913 } 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "method": "bdev_wait_for_examine" 00:06:29.913 } 00:06:29.913 ] 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "subsystem": "scsi", 00:06:29.913 "config": null 00:06:29.913 }, 00:06:29.913 { 00:06:29.913 "subsystem": "scheduler", 00:06:29.913 "config": [ 00:06:29.913 { 00:06:29.913 "method": "framework_set_scheduler", 00:06:29.913 "params": { 00:06:29.913 "name": "static" 00:06:29.913 } 00:06:29.913 } 00:06:29.913 ] 00:06:29.913 }, 00:06:29.914 { 00:06:29.914 "subsystem": "vhost_scsi", 00:06:29.914 "config": [] 00:06:29.914 }, 00:06:29.914 { 00:06:29.914 "subsystem": "vhost_blk", 00:06:29.914 "config": [] 00:06:29.914 }, 00:06:29.914 { 00:06:29.914 "subsystem": "ublk", 00:06:29.914 "config": [] 00:06:29.914 }, 00:06:29.914 { 00:06:29.914 "subsystem": "nbd", 00:06:29.914 "config": [] 00:06:29.914 }, 00:06:29.914 { 00:06:29.914 "subsystem": "nvmf", 00:06:29.914 "config": [ 00:06:29.914 { 00:06:29.914 "method": "nvmf_set_config", 00:06:29.914 "params": { 00:06:29.914 "discovery_filter": "match_any", 00:06:29.914 "admin_cmd_passthru": { 00:06:29.914 "identify_ctrlr": false 00:06:29.914 }, 00:06:29.914 "dhchap_digests": [ 00:06:29.914 "sha256", 00:06:29.914 "sha384", 00:06:29.914 "sha512" 00:06:29.914 ], 00:06:29.914 "dhchap_dhgroups": [ 00:06:29.914 "null", 00:06:29.914 "ffdhe2048", 00:06:29.914 "ffdhe3072", 00:06:29.914 "ffdhe4096", 00:06:29.914 "ffdhe6144", 00:06:29.914 "ffdhe8192" 00:06:29.914 ] 00:06:29.914 } 00:06:29.914 }, 00:06:29.914 { 00:06:29.914 "method": "nvmf_set_max_subsystems", 00:06:29.914 "params": { 00:06:29.914 "max_subsystems": 1024 00:06:29.914 } 00:06:29.914 }, 00:06:29.914 { 00:06:29.914 "method": "nvmf_set_crdt", 00:06:29.914 "params": { 00:06:29.914 "crdt1": 0, 00:06:29.914 "crdt2": 0, 00:06:29.914 "crdt3": 0 00:06:29.914 } 00:06:29.914 }, 00:06:29.914 { 00:06:29.914 "method": "nvmf_create_transport", 00:06:29.914 "params": { 00:06:29.914 "trtype": "TCP", 00:06:29.914 "max_queue_depth": 128, 00:06:29.914 "max_io_qpairs_per_ctrlr": 127, 00:06:29.914 "in_capsule_data_size": 4096, 00:06:29.914 "max_io_size": 131072, 00:06:29.914 "io_unit_size": 131072, 00:06:29.914 "max_aq_depth": 128, 00:06:29.914 "num_shared_buffers": 511, 00:06:29.914 "buf_cache_size": 4294967295, 00:06:29.914 "dif_insert_or_strip": false, 00:06:29.914 "zcopy": false, 00:06:29.914 "c2h_success": true, 00:06:29.914 "sock_priority": 0, 00:06:29.914 "abort_timeout_sec": 1, 00:06:29.914 "ack_timeout": 0, 00:06:29.914 "data_wr_pool_size": 0 00:06:29.914 } 00:06:29.914 } 00:06:29.914 ] 00:06:29.914 }, 00:06:29.914 { 00:06:29.914 "subsystem": "iscsi", 00:06:29.914 "config": [ 00:06:29.914 { 00:06:29.914 "method": "iscsi_set_options", 00:06:29.914 "params": { 00:06:29.914 "node_base": "iqn.2016-06.io.spdk", 00:06:29.914 "max_sessions": 
128, 00:06:29.914 "max_connections_per_session": 2, 00:06:29.914 "max_queue_depth": 64, 00:06:29.914 "default_time2wait": 2, 00:06:29.914 "default_time2retain": 20, 00:06:29.914 "first_burst_length": 8192, 00:06:29.914 "immediate_data": true, 00:06:29.914 "allow_duplicated_isid": false, 00:06:29.914 "error_recovery_level": 0, 00:06:29.914 "nop_timeout": 60, 00:06:29.914 "nop_in_interval": 30, 00:06:29.914 "disable_chap": false, 00:06:29.914 "require_chap": false, 00:06:29.914 "mutual_chap": false, 00:06:29.914 "chap_group": 0, 00:06:29.914 "max_large_datain_per_connection": 64, 00:06:29.914 "max_r2t_per_connection": 4, 00:06:29.914 "pdu_pool_size": 36864, 00:06:29.914 "immediate_data_pool_size": 16384, 00:06:29.914 "data_out_pool_size": 2048 00:06:29.914 } 00:06:29.914 } 00:06:29.914 ] 00:06:29.914 } 00:06:29.914 ] 00:06:29.914 } 00:06:29.914 06:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:29.914 06:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 939003 00:06:29.914 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 939003 ']' 00:06:29.914 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 939003 00:06:29.914 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:29.914 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.914 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 939003 00:06:29.914 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.914 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.914 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 939003' 00:06:29.914 killing process with pid 939003 00:06:29.914 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 939003 00:06:29.914 06:10:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 939003 00:06:30.172 06:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=939142 00:06:30.172 06:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:30.172 06:10:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:35.452 06:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 939142 00:06:35.452 06:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 939142 ']' 00:06:35.452 06:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 939142 00:06:35.452 06:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:35.452 06:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.452 06:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 939142 00:06:35.452 06:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.452 06:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.452 06:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 939142' 00:06:35.452 killing process with pid 939142 00:06:35.452 06:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 939142 00:06:35.452 06:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 939142 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:35.713 00:06:35.713 real 0m6.537s 00:06:35.713 user 0m6.190s 00:06:35.713 sys 0m0.666s 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:35.713 ************************************ 00:06:35.713 END TEST skip_rpc_with_json 00:06:35.713 ************************************ 00:06:35.713 06:10:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:35.713 06:10:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.713 06:10:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.713 06:10:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.713 ************************************ 00:06:35.713 START TEST skip_rpc_with_delay 00:06:35.713 ************************************ 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:35.713 [2024-12-08 
06:10:25.815887] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.713 00:06:35.713 real 0m0.075s 00:06:35.713 user 0m0.048s 00:06:35.713 sys 0m0.027s 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.713 06:10:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:35.713 ************************************ 00:06:35.713 END TEST skip_rpc_with_delay 00:06:35.713 ************************************ 00:06:35.972 06:10:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:35.972 06:10:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:35.972 06:10:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:35.972 06:10:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.972 06:10:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.972 06:10:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.972 ************************************ 00:06:35.972 START TEST exit_on_failed_rpc_init 00:06:35.972 ************************************ 00:06:35.972 06:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:35.972 06:10:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=939863 00:06:35.972 06:10:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.972 06:10:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 939863 00:06:35.972 06:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 939863 ']' 00:06:35.972 06:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.972 06:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.972 06:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.972 06:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.972 06:10:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:35.972 [2024-12-08 06:10:25.938674] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
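[annotation] The *ERROR* line above is the expected outcome of skip_rpc_with_delay: app.c refuses the flag combination outright, since --wait-for-rpc is meaningless when no RPC server will ever start. The whole test is effectively this (same $SPDK_DIR assumption as before):

    # Expected: immediate non-zero exit plus the app.c:842 error, no running target.
    "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc
    echo "spdk_tgt exited with $?"    # the harness folds any failure into es=1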
00:06:35.972 [2024-12-08 06:10:25.938798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid939863 ] 00:06:35.972 [2024-12-08 06:10:26.005374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.972 [2024-12-08 06:10:26.065416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:36.232 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:36.493 [2024-12-08 06:10:26.384307] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:06:36.493 [2024-12-08 06:10:26.384413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid939987 ] 00:06:36.493 [2024-12-08 06:10:26.450354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.493 [2024-12-08 06:10:26.509506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.493 [2024-12-08 06:10:26.509644] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
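[annotation] This in-use error is the point of exit_on_failed_rpc_init: the second target gets through EAL init on core 1, then dies because the first instance already owns /var/tmp/spdk.sock. A sketch of the collision and the usual way out; /var/tmp/spdk2.sock is an illustrative path, not taken from the log:

    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &                        # owns /var/tmp/spdk.sock
    sleep 2
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x2                          # exits: spdk.sock in use
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x2 -r /var/tmp/spdk2.sock & # distinct socket, coexists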
00:06:36.493 [2024-12-08 06:10:26.509664] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:36.493 [2024-12-08 06:10:26.509675] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.493 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:36.493 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.493 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:36.493 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:36.493 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:36.493 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.493 06:10:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:36.493 06:10:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 939863 00:06:36.493 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 939863 ']' 00:06:36.493 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 939863 00:06:36.493 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:36.493 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.493 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 939863 00:06:36.753 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.753 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.753 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 939863' 00:06:36.753 killing process with pid 939863 00:06:36.753 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 939863 00:06:36.753 06:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 939863 00:06:37.013 00:06:37.013 real 0m1.142s 00:06:37.013 user 0m1.289s 00:06:37.013 sys 0m0.401s 00:06:37.013 06:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.013 06:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:37.013 ************************************ 00:06:37.013 END TEST exit_on_failed_rpc_init 00:06:37.013 ************************************ 00:06:37.013 06:10:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:37.013 00:06:37.013 real 0m13.562s 00:06:37.013 user 0m12.858s 00:06:37.013 sys 0m1.595s 00:06:37.013 06:10:27 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.013 06:10:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.013 ************************************ 00:06:37.013 END TEST skip_rpc 00:06:37.013 ************************************ 00:06:37.013 06:10:27 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:37.013 06:10:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.013 06:10:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.013 06:10:27 -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.013 ************************************ 00:06:37.013 START TEST rpc_client 00:06:37.013 ************************************ 00:06:37.013 06:10:27 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:37.272 * Looking for test storage... 00:06:37.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:37.272 06:10:27 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:37.272 06:10:27 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:37.272 06:10:27 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:37.272 06:10:27 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.272 06:10:27 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:37.272 06:10:27 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.272 06:10:27 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:37.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.272 --rc genhtml_branch_coverage=1 00:06:37.272 --rc genhtml_function_coverage=1 00:06:37.272 --rc genhtml_legend=1 00:06:37.272 --rc geninfo_all_blocks=1 00:06:37.272 --rc geninfo_unexecuted_blocks=1 00:06:37.272 00:06:37.272 ' 00:06:37.272 06:10:27 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:37.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.272 --rc genhtml_branch_coverage=1 00:06:37.272 --rc genhtml_function_coverage=1 00:06:37.272 --rc genhtml_legend=1 00:06:37.272 --rc geninfo_all_blocks=1 00:06:37.272 --rc geninfo_unexecuted_blocks=1 00:06:37.272 00:06:37.272 ' 00:06:37.272 06:10:27 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:37.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.272 --rc genhtml_branch_coverage=1 00:06:37.272 --rc genhtml_function_coverage=1 00:06:37.273 --rc genhtml_legend=1 00:06:37.273 --rc geninfo_all_blocks=1 00:06:37.273 --rc geninfo_unexecuted_blocks=1 00:06:37.273 00:06:37.273 ' 00:06:37.273 06:10:27 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:37.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.273 --rc genhtml_branch_coverage=1 00:06:37.273 --rc genhtml_function_coverage=1 00:06:37.273 --rc genhtml_legend=1 00:06:37.273 --rc geninfo_all_blocks=1 00:06:37.273 --rc geninfo_unexecuted_blocks=1 00:06:37.273 00:06:37.273 ' 00:06:37.273 06:10:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:37.273 OK 00:06:37.273 06:10:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:37.273 00:06:37.273 real 0m0.151s 00:06:37.273 user 0m0.098s 00:06:37.273 sys 0m0.062s 00:06:37.273 06:10:27 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.273 06:10:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:37.273 ************************************ 00:06:37.273 END TEST rpc_client 00:06:37.273 ************************************ 00:06:37.273 06:10:27 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
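[annotation] The cmp_versions chatter preceding each test is scripts/common.sh deciding whether the installed lcov predates 2.0 (lt 1.15 2 here) so it can pick compatible coverage flags. An equivalent check using sort -V rather than the repo's field-by-field loop (the function name is ours):

    # Sketch: version_lt A B succeeds when A sorts strictly before B.
    version_lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.0"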
00:06:37.273 06:10:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.273 06:10:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.273 06:10:27 -- common/autotest_common.sh@10 -- # set +x 00:06:37.273 ************************************ 00:06:37.273 START TEST json_config 00:06:37.273 ************************************ 00:06:37.273 06:10:27 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:37.273 06:10:27 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:37.273 06:10:27 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:37.273 06:10:27 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:37.533 06:10:27 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:37.533 06:10:27 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.533 06:10:27 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.533 06:10:27 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.533 06:10:27 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.533 06:10:27 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.533 06:10:27 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.533 06:10:27 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.533 06:10:27 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.533 06:10:27 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.533 06:10:27 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.533 06:10:27 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.533 06:10:27 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:37.533 06:10:27 json_config -- scripts/common.sh@345 -- # : 1 00:06:37.533 06:10:27 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.533 06:10:27 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.533 06:10:27 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:37.533 06:10:27 json_config -- scripts/common.sh@353 -- # local d=1 00:06:37.533 06:10:27 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.533 06:10:27 json_config -- scripts/common.sh@355 -- # echo 1 00:06:37.533 06:10:27 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.533 06:10:27 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:37.533 06:10:27 json_config -- scripts/common.sh@353 -- # local d=2 00:06:37.533 06:10:27 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.533 06:10:27 json_config -- scripts/common.sh@355 -- # echo 2 00:06:37.533 06:10:27 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.533 06:10:27 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.533 06:10:27 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.533 06:10:27 json_config -- scripts/common.sh@368 -- # return 0 00:06:37.533 06:10:27 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.533 06:10:27 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:37.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.533 --rc genhtml_branch_coverage=1 00:06:37.533 --rc genhtml_function_coverage=1 00:06:37.533 --rc genhtml_legend=1 00:06:37.533 --rc geninfo_all_blocks=1 00:06:37.533 --rc geninfo_unexecuted_blocks=1 00:06:37.533 00:06:37.533 ' 00:06:37.533 06:10:27 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:37.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.533 --rc genhtml_branch_coverage=1 00:06:37.533 --rc genhtml_function_coverage=1 00:06:37.533 --rc genhtml_legend=1 00:06:37.533 --rc geninfo_all_blocks=1 00:06:37.533 --rc geninfo_unexecuted_blocks=1 00:06:37.533 00:06:37.533 ' 00:06:37.533 06:10:27 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:37.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.533 --rc genhtml_branch_coverage=1 00:06:37.533 --rc genhtml_function_coverage=1 00:06:37.533 --rc genhtml_legend=1 00:06:37.533 --rc geninfo_all_blocks=1 00:06:37.533 --rc geninfo_unexecuted_blocks=1 00:06:37.533 00:06:37.533 ' 00:06:37.533 06:10:27 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:37.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.533 --rc genhtml_branch_coverage=1 00:06:37.533 --rc genhtml_function_coverage=1 00:06:37.533 --rc genhtml_legend=1 00:06:37.533 --rc geninfo_all_blocks=1 00:06:37.533 --rc geninfo_unexecuted_blocks=1 00:06:37.533 00:06:37.533 ' 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
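[annotation] For reference, the ambient NVMe-oF defaults nvmf/common.sh is establishing here, with values taken from the log (the compact export form is ours):

    export NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    export NVMF_IP_PREFIX=192.168.100 NVMF_IP_LEAST_ADDR=8
    export NVMF_TCP_IP_ADDRESS=127.0.0.1 NVMF_SERIAL=SPDKISFASTANDAWESOME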
00:06:37.533 06:10:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.533 06:10:27 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.533 06:10:27 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.533 06:10:27 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.533 06:10:27 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.533 06:10:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.533 06:10:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.533 06:10:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.533 06:10:27 json_config -- paths/export.sh@5 -- # export PATH 00:06:37.533 06:10:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@51 -- # : 0 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
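[annotation] One wart worth flagging in the next block: nvmf/common.sh line 33 ends up running '[' '' -eq 1 ']' because the variable it tests arrives empty, so bash prints "[: : integer expression expected". The suite tolerates the noise; guarding the expansion would silence it. A sketch of the failure mode and the defensive form:

    var=""
    [ "$var" -eq 1 ] 2>/dev/null             # as in the log: integer expression expected
    [ "${var:-0}" -eq 1 ] && echo "set"      # defaulting the expansion avoids the warning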
00:06:37.533 06:10:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.533 06:10:27 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:37.533 06:10:27 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:37.534 INFO: JSON configuration test init 00:06:37.534 06:10:27 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:37.534 06:10:27 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:37.534 06:10:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.534 06:10:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.534 06:10:27 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:37.534 06:10:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.534 06:10:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.534 06:10:27 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:37.534 06:10:27 json_config -- 
json_config/common.sh@9 -- # local app=target 00:06:37.534 06:10:27 json_config -- json_config/common.sh@10 -- # shift 00:06:37.534 06:10:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:37.534 06:10:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:37.534 06:10:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:37.534 06:10:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.534 06:10:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.534 06:10:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=940245 00:06:37.534 06:10:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:37.534 06:10:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:37.534 Waiting for target to run... 00:06:37.534 06:10:27 json_config -- json_config/common.sh@25 -- # waitforlisten 940245 /var/tmp/spdk_tgt.sock 00:06:37.534 06:10:27 json_config -- common/autotest_common.sh@835 -- # '[' -z 940245 ']' 00:06:37.534 06:10:27 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:37.534 06:10:27 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.534 06:10:27 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:37.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:37.534 06:10:27 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.534 06:10:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.534 [2024-12-08 06:10:27.503290] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
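[annotation] waitforlisten 940245 is the harness polling until the target answers on /var/tmp/spdk_tgt.sock, bounded by max_retries=100. Roughly this (not the helper's actual body; rpc_get_methods is one of the RPCs that answers even in --wait-for-rpc state):

    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done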
00:06:37.534 [2024-12-08 06:10:27.503359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940245 ] 00:06:38.101 [2024-12-08 06:10:28.021968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.101 [2024-12-08 06:10:28.073243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.672 06:10:28 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.672 06:10:28 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:38.672 06:10:28 json_config -- json_config/common.sh@26 -- # echo '' 00:06:38.672 00:06:38.672 06:10:28 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:38.672 06:10:28 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:38.672 06:10:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.672 06:10:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.672 06:10:28 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:38.672 06:10:28 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:38.672 06:10:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.672 06:10:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.672 06:10:28 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:38.672 06:10:28 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:38.672 06:10:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:42.049 06:10:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:42.049 06:10:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:42.049 06:10:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:42.049 06:10:31 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@54 -- # sort 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:42.049 06:10:31 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:42.049 06:10:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:42.049 06:10:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.049 06:10:32 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:42.049 06:10:32 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:42.049 06:10:32 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:42.049 06:10:32 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:42.049 06:10:32 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:42.049 06:10:32 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:42.049 06:10:32 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:42.049 06:10:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:42.049 06:10:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.049 06:10:32 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:42.049 06:10:32 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:42.049 06:10:32 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:42.049 06:10:32 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:42.049 06:10:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:42.307 MallocForNvmf0 00:06:42.307 06:10:32 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:42.307 06:10:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:42.564 MallocForNvmf1 00:06:42.564 06:10:32 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:42.564 06:10:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:42.821 [2024-12-08 06:10:32.774992] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.821 06:10:32 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:42.821 06:10:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:43.079 06:10:33 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:43.079 06:10:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:43.336 06:10:33 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:43.336 06:10:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:43.594 06:10:33 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:43.594 06:10:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:43.854 [2024-12-08 06:10:33.830387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:43.854 06:10:33 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:43.854 06:10:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.854 06:10:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.854 06:10:33 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:43.854 06:10:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.854 06:10:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.854 06:10:33 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:43.854 06:10:33 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:43.854 06:10:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:44.112 MallocBdevForConfigChangeCheck 00:06:44.112 06:10:34 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:44.112 06:10:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.112 06:10:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.112 06:10:34 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:44.112 06:10:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:44.684 06:10:34 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:44.684 INFO: shutting down applications... 
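[annotation] Unrolled, the create_nvmf_subsystem_config sequence above is the following rpc.py calls, taken from the log (only the RPC shorthand variable is ours):

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"   # path has no spaces, so unquoted use below is safe
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC save_config > spdk_tgt_config.json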
00:06:44.684 06:10:34 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:44.684 06:10:34 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:44.684 06:10:34 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:44.684 06:10:34 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:46.593 Calling clear_iscsi_subsystem 00:06:46.593 Calling clear_nvmf_subsystem 00:06:46.593 Calling clear_nbd_subsystem 00:06:46.593 Calling clear_ublk_subsystem 00:06:46.593 Calling clear_vhost_blk_subsystem 00:06:46.593 Calling clear_vhost_scsi_subsystem 00:06:46.593 Calling clear_bdev_subsystem 00:06:46.593 06:10:36 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:46.593 06:10:36 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:46.593 06:10:36 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:46.593 06:10:36 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:46.593 06:10:36 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:46.593 06:10:36 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:46.593 06:10:36 json_config -- json_config/json_config.sh@352 -- # break 00:06:46.593 06:10:36 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:46.593 06:10:36 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:46.593 06:10:36 json_config -- json_config/common.sh@31 -- # local app=target 00:06:46.593 06:10:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:46.593 06:10:36 json_config -- json_config/common.sh@35 -- # [[ -n 940245 ]] 00:06:46.593 06:10:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 940245 00:06:46.593 06:10:36 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:46.593 06:10:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:46.593 06:10:36 json_config -- json_config/common.sh@41 -- # kill -0 940245 00:06:46.593 06:10:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:47.164 06:10:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:47.164 06:10:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.164 06:10:37 json_config -- json_config/common.sh@41 -- # kill -0 940245 00:06:47.164 06:10:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:47.164 06:10:37 json_config -- json_config/common.sh@43 -- # break 00:06:47.164 06:10:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:47.164 06:10:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:47.164 SPDK target shutdown done 00:06:47.164 06:10:37 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:47.164 INFO: relaunching applications... 
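The target shutdown traced above (json_config/common.sh) amounts to a SIGINT followed by a bounded liveness poll; a minimal bash sketch reconstructed from the @38-@45 trace lines, not the script verbatim:

    # send SIGINT, then poll up to 30 times (0.5 s apart) until the target exits
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break    # kill -0 only tests liveness
        sleep 0.5
    done
    echo 'SPDK target shutdown done'

In the run above the second kill -0 probe fails, so the loop breaks after a single 0.5 s sleep.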
00:06:47.164 06:10:37 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.164 06:10:37 json_config -- json_config/common.sh@9 -- # local app=target 00:06:47.164 06:10:37 json_config -- json_config/common.sh@10 -- # shift 00:06:47.164 06:10:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:47.164 06:10:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:47.164 06:10:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:47.164 06:10:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:47.164 06:10:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:47.164 06:10:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=941455 00:06:47.164 06:10:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.164 06:10:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:47.164 Waiting for target to run... 00:06:47.164 06:10:37 json_config -- json_config/common.sh@25 -- # waitforlisten 941455 /var/tmp/spdk_tgt.sock 00:06:47.164 06:10:37 json_config -- common/autotest_common.sh@835 -- # '[' -z 941455 ']' 00:06:47.164 06:10:37 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:47.164 06:10:37 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.164 06:10:37 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:47.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:47.164 06:10:37 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.164 06:10:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.164 [2024-12-08 06:10:37.219384] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:06:47.164 [2024-12-08 06:10:37.219463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941455 ] 00:06:47.732 [2024-12-08 06:10:37.750191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.732 [2024-12-08 06:10:37.804076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.021 [2024-12-08 06:10:40.859482] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.021 [2024-12-08 06:10:40.892070] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:51.021 06:10:40 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.021 06:10:40 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:51.021 06:10:40 json_config -- json_config/common.sh@26 -- # echo '' 00:06:51.021 00:06:51.021 06:10:40 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:51.021 06:10:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:51.021 INFO: Checking if target configuration is the same... 
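The "configuration is the same" check that follows is a normalize-then-diff: both the live config (save_config over the RPC socket) and the on-disk spdk_tgt_config.json are passed through config_filter.py -method sort before diff -u, so key ordering cannot cause false mismatches. A condensed sketch of the json_diff.sh flow traced below (live.json and disk.json are illustrative names; the script itself uses mktemp files such as /tmp/62.XXX):

    # normalize both JSON configs, then compare; exit 0 means identical
    rpc.py -s /var/tmp/spdk_tgt.sock save_config | config_filter.py -method sort > live.json
    config_filter.py -method sort < spdk_tgt_config.json > disk.json
    diff -u live.json disk.json && echo 'INFO: JSON config files are the same'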
00:06:51.021 06:10:40 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:51.021 06:10:40 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:51.021 06:10:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:51.021 + '[' 2 -ne 2 ']' 00:06:51.021 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:51.021 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:51.021 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:51.021 +++ basename /dev/fd/62 00:06:51.021 ++ mktemp /tmp/62.XXX 00:06:51.021 + tmp_file_1=/tmp/62.vMX 00:06:51.021 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:51.021 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:51.021 + tmp_file_2=/tmp/spdk_tgt_config.json.lsQ 00:06:51.021 + ret=0 00:06:51.021 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:51.278 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:51.278 + diff -u /tmp/62.vMX /tmp/spdk_tgt_config.json.lsQ 00:06:51.278 + echo 'INFO: JSON config files are the same' 00:06:51.278 INFO: JSON config files are the same 00:06:51.278 + rm /tmp/62.vMX /tmp/spdk_tgt_config.json.lsQ 00:06:51.278 + exit 0 00:06:51.278 06:10:41 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:51.278 06:10:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:51.278 INFO: changing configuration and checking if this can be detected... 00:06:51.278 06:10:41 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:51.278 06:10:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:51.536 06:10:41 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:51.536 06:10:41 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:51.536 06:10:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:51.536 + '[' 2 -ne 2 ']' 00:06:51.536 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:51.795 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:51.795 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:51.795 +++ basename /dev/fd/62 00:06:51.795 ++ mktemp /tmp/62.XXX 00:06:51.795 + tmp_file_1=/tmp/62.Mes 00:06:51.795 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:51.795 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:51.795 + tmp_file_2=/tmp/spdk_tgt_config.json.Tha 00:06:51.795 + ret=0 00:06:51.795 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:52.053 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:52.053 + diff -u /tmp/62.Mes /tmp/spdk_tgt_config.json.Tha 00:06:52.053 + ret=1 00:06:52.053 + echo '=== Start of file: /tmp/62.Mes ===' 00:06:52.053 + cat /tmp/62.Mes 00:06:52.053 + echo '=== End of file: /tmp/62.Mes ===' 00:06:52.053 + echo '' 00:06:52.053 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Tha ===' 00:06:52.053 + cat /tmp/spdk_tgt_config.json.Tha 00:06:52.053 + echo '=== End of file: /tmp/spdk_tgt_config.json.Tha ===' 00:06:52.053 + echo '' 00:06:52.053 + rm /tmp/62.Mes /tmp/spdk_tgt_config.json.Tha 00:06:52.053 + exit 1 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:52.053 INFO: configuration change detected. 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:52.053 06:10:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:52.053 06:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@324 -- # [[ -n 941455 ]] 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:52.053 06:10:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:52.053 06:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:52.053 06:10:42 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:52.054 06:10:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:52.054 06:10:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:52.054 06:10:42 json_config -- json_config/json_config.sh@330 -- # killprocess 941455 00:06:52.054 06:10:42 json_config -- common/autotest_common.sh@954 -- # '[' -z 941455 ']' 00:06:52.054 06:10:42 json_config -- common/autotest_common.sh@958 -- # kill -0 941455 00:06:52.054 06:10:42 json_config -- common/autotest_common.sh@959 -- # uname 00:06:52.054 06:10:42 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.054 06:10:42 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 941455 00:06:52.314 06:10:42 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.314 06:10:42 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.314 06:10:42 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 941455' 00:06:52.314 killing process with pid 941455 00:06:52.314 06:10:42 json_config -- common/autotest_common.sh@973 -- # kill 941455 00:06:52.314 06:10:42 json_config -- common/autotest_common.sh@978 -- # wait 941455 00:06:54.223 06:10:43 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:54.223 06:10:43 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:54.223 06:10:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:54.223 06:10:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.223 06:10:43 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:54.223 06:10:43 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:54.223 INFO: Success 00:06:54.223 00:06:54.223 real 0m16.597s 00:06:54.223 user 0m17.939s 00:06:54.223 sys 0m2.821s 00:06:54.223 06:10:43 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.223 06:10:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.223 ************************************ 00:06:54.223 END TEST json_config 00:06:54.223 ************************************ 00:06:54.223 06:10:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:54.223 06:10:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.223 06:10:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.223 06:10:43 -- common/autotest_common.sh@10 -- # set +x 00:06:54.223 ************************************ 00:06:54.223 START TEST json_config_extra_key 00:06:54.223 ************************************ 00:06:54.223 06:10:43 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:54.223 06:10:43 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:54.223 06:10:43 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:54.223 06:10:43 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:54.223 06:10:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:54.223 06:10:44 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.223 06:10:44 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.223 06:10:44 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.223 06:10:44 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.223 06:10:44 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.223 06:10:44 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.223 06:10:44 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.223 06:10:44 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.223 06:10:44 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:06:54.223 06:10:44 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.223 06:10:44 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.223 06:10:44 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:54.224 06:10:44 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.224 06:10:44 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:54.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.224 --rc genhtml_branch_coverage=1 00:06:54.224 --rc genhtml_function_coverage=1 00:06:54.224 --rc genhtml_legend=1 00:06:54.224 --rc geninfo_all_blocks=1 00:06:54.224 --rc geninfo_unexecuted_blocks=1 00:06:54.224 00:06:54.224 ' 00:06:54.224 06:10:44 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:54.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.224 --rc genhtml_branch_coverage=1 00:06:54.224 --rc genhtml_function_coverage=1 00:06:54.224 --rc genhtml_legend=1 00:06:54.224 --rc geninfo_all_blocks=1 00:06:54.224 --rc geninfo_unexecuted_blocks=1 00:06:54.224 00:06:54.224 ' 00:06:54.224 06:10:44 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:54.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.224 --rc genhtml_branch_coverage=1 00:06:54.224 --rc genhtml_function_coverage=1 00:06:54.224 --rc genhtml_legend=1 00:06:54.224 --rc geninfo_all_blocks=1 00:06:54.224 --rc geninfo_unexecuted_blocks=1 00:06:54.224 00:06:54.224 ' 00:06:54.224 06:10:44 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:54.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.224 --rc genhtml_branch_coverage=1 00:06:54.224 --rc genhtml_function_coverage=1 00:06:54.224 --rc genhtml_legend=1 00:06:54.224 --rc geninfo_all_blocks=1 00:06:54.224 --rc geninfo_unexecuted_blocks=1 00:06:54.224 00:06:54.224 ' 00:06:54.224 06:10:44 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.224 06:10:44 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.224 06:10:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.224 06:10:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.224 06:10:44 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.224 06:10:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:54.224 06:10:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:54.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:54.224 06:10:44 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:54.224 06:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:54.224 06:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:54.224 06:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:54.224 06:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:54.224 06:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:54.224 06:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:54.224 06:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:54.224 06:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:54.224 06:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:54.224 06:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:54.224 06:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:54.224 INFO: launching applications... 
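The launch that follows reduces to starting spdk_tgt against extra_key.json and blocking until its RPC socket answers; a minimal sketch of json_config_test_start_app as traced below (build/bin and test paths shortened from the workspace-absolute ones, and the $! capture assumed rather than shown in the trace):

    # start the target with the extra-key JSON config, then wait on the RPC socket
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    app_pid[target]=$!
    waitforlisten "${app_pid[target]}" /var/tmp/spdk_tgt.sock

waitforlisten here is the helper from common/autotest_common.sh; per the trace it retries (max_retries=100) until the UNIX domain socket accepts connections.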
00:06:54.224 06:10:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:54.224 06:10:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:54.224 06:10:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:54.224 06:10:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:54.224 06:10:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:54.224 06:10:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:54.224 06:10:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:54.224 06:10:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:54.224 06:10:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=942380 00:06:54.224 06:10:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:54.224 06:10:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:54.224 Waiting for target to run... 00:06:54.224 06:10:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 942380 /var/tmp/spdk_tgt.sock 00:06:54.224 06:10:44 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 942380 ']' 00:06:54.224 06:10:44 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:54.224 06:10:44 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.224 06:10:44 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:54.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:54.225 06:10:44 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.225 06:10:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:54.225 [2024-12-08 06:10:44.143599] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:06:54.225 [2024-12-08 06:10:44.143711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942380 ] 00:06:54.484 [2024-12-08 06:10:44.489021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.484 [2024-12-08 06:10:44.529832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.056 06:10:45 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.056 06:10:45 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:55.056 06:10:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:55.056 00:06:55.056 06:10:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:55.056 INFO: shutting down applications... 
00:06:55.056 06:10:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:55.056 06:10:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:55.056 06:10:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:55.056 06:10:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 942380 ]] 00:06:55.056 06:10:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 942380 00:06:55.056 06:10:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:55.056 06:10:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:55.056 06:10:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 942380 00:06:55.056 06:10:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:55.634 06:10:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:55.634 06:10:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:55.634 06:10:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 942380 00:06:55.634 06:10:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:55.634 06:10:45 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:55.634 06:10:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:55.634 06:10:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:55.634 SPDK target shutdown done 00:06:55.634 06:10:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:55.634 Success 00:06:55.634 00:06:55.634 real 0m1.697s 00:06:55.634 user 0m1.685s 00:06:55.634 sys 0m0.462s 00:06:55.634 06:10:45 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.634 06:10:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:55.634 ************************************ 00:06:55.634 END TEST json_config_extra_key 00:06:55.634 ************************************ 00:06:55.634 06:10:45 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:55.634 06:10:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.634 06:10:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.634 06:10:45 -- common/autotest_common.sh@10 -- # set +x 00:06:55.634 ************************************ 00:06:55.634 START TEST alias_rpc 00:06:55.634 ************************************ 00:06:55.634 06:10:45 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:55.634 * Looking for test storage... 
00:06:55.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:55.634 06:10:45 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.893 06:10:45 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.893 06:10:45 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.893 06:10:45 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.893 06:10:45 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:55.893 06:10:45 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.893 06:10:45 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.893 --rc genhtml_branch_coverage=1 00:06:55.893 --rc genhtml_function_coverage=1 00:06:55.893 --rc genhtml_legend=1 00:06:55.893 --rc geninfo_all_blocks=1 00:06:55.893 --rc geninfo_unexecuted_blocks=1 00:06:55.893 00:06:55.893 ' 00:06:55.893 06:10:45 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.893 --rc genhtml_branch_coverage=1 00:06:55.893 --rc genhtml_function_coverage=1 00:06:55.893 --rc genhtml_legend=1 00:06:55.893 --rc geninfo_all_blocks=1 00:06:55.893 --rc geninfo_unexecuted_blocks=1 00:06:55.893 00:06:55.893 ' 00:06:55.893 06:10:45 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.893 --rc genhtml_branch_coverage=1 00:06:55.893 --rc genhtml_function_coverage=1 00:06:55.893 --rc genhtml_legend=1 00:06:55.893 --rc geninfo_all_blocks=1 00:06:55.893 --rc geninfo_unexecuted_blocks=1 00:06:55.893 00:06:55.893 ' 00:06:55.893 06:10:45 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.893 --rc genhtml_branch_coverage=1 00:06:55.893 --rc genhtml_function_coverage=1 00:06:55.893 --rc genhtml_legend=1 00:06:55.893 --rc geninfo_all_blocks=1 00:06:55.893 --rc geninfo_unexecuted_blocks=1 00:06:55.893 00:06:55.893 ' 00:06:55.893 06:10:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:55.893 06:10:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=942691 00:06:55.893 06:10:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.893 06:10:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 942691 00:06:55.893 06:10:45 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 942691 ']' 00:06:55.893 06:10:45 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.893 06:10:45 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.893 06:10:45 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.893 06:10:45 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.893 06:10:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.893 [2024-12-08 06:10:45.901329] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:06:55.893 [2024-12-08 06:10:45.901435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942691 ] 00:06:55.893 [2024-12-08 06:10:45.969192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.152 [2024-12-08 06:10:46.026573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.411 06:10:46 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.411 06:10:46 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:56.411 06:10:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:56.672 06:10:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 942691 00:06:56.672 06:10:46 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 942691 ']' 00:06:56.672 06:10:46 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 942691 00:06:56.672 06:10:46 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:56.672 06:10:46 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.672 06:10:46 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 942691 00:06:56.672 06:10:46 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.672 06:10:46 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.672 06:10:46 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 942691' 00:06:56.672 killing process with pid 942691 00:06:56.672 06:10:46 alias_rpc -- common/autotest_common.sh@973 -- # kill 942691 00:06:56.672 06:10:46 alias_rpc -- common/autotest_common.sh@978 -- # wait 942691 00:06:56.930 00:06:56.930 real 0m1.346s 00:06:56.930 user 0m1.482s 00:06:56.930 sys 0m0.431s 00:06:56.930 06:10:47 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.930 06:10:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.930 ************************************ 00:06:56.930 END TEST alias_rpc 00:06:56.930 ************************************ 00:06:57.187 06:10:47 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:57.187 06:10:47 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:57.187 06:10:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.187 06:10:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.187 06:10:47 -- common/autotest_common.sh@10 -- # set +x 00:06:57.187 ************************************ 00:06:57.187 START TEST spdkcli_tcp 00:06:57.187 ************************************ 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:57.187 * Looking for test storage... 
00:06:57.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.187 06:10:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:57.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.187 --rc genhtml_branch_coverage=1 00:06:57.187 --rc genhtml_function_coverage=1 00:06:57.187 --rc genhtml_legend=1 00:06:57.187 --rc geninfo_all_blocks=1 00:06:57.187 --rc geninfo_unexecuted_blocks=1 00:06:57.187 00:06:57.187 ' 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:57.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.187 --rc genhtml_branch_coverage=1 00:06:57.187 --rc genhtml_function_coverage=1 00:06:57.187 --rc genhtml_legend=1 00:06:57.187 --rc geninfo_all_blocks=1 00:06:57.187 --rc 
geninfo_unexecuted_blocks=1 00:06:57.187 00:06:57.187 ' 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:57.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.187 --rc genhtml_branch_coverage=1 00:06:57.187 --rc genhtml_function_coverage=1 00:06:57.187 --rc genhtml_legend=1 00:06:57.187 --rc geninfo_all_blocks=1 00:06:57.187 --rc geninfo_unexecuted_blocks=1 00:06:57.187 00:06:57.187 ' 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:57.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.187 --rc genhtml_branch_coverage=1 00:06:57.187 --rc genhtml_function_coverage=1 00:06:57.187 --rc genhtml_legend=1 00:06:57.187 --rc geninfo_all_blocks=1 00:06:57.187 --rc geninfo_unexecuted_blocks=1 00:06:57.187 00:06:57.187 ' 00:06:57.187 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:57.187 06:10:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:57.187 06:10:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:57.187 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:57.187 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:57.187 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:57.187 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.187 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=942892 00:06:57.187 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:57.187 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 942892 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 942892 ']' 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.187 06:10:47 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.188 06:10:47 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.188 06:10:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.188 [2024-12-08 06:10:47.302213] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:06:57.188 [2024-12-08 06:10:47.302291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942892 ] 00:06:57.445 [2024-12-08 06:10:47.369269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.445 [2024-12-08 06:10:47.425637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.445 [2024-12-08 06:10:47.425641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.703 06:10:47 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.703 06:10:47 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:57.703 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=943018 00:06:57.703 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:57.703 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:57.963 [ 00:06:57.963 "bdev_malloc_delete", 00:06:57.963 "bdev_malloc_create", 00:06:57.963 "bdev_null_resize", 00:06:57.963 "bdev_null_delete", 00:06:57.963 "bdev_null_create", 00:06:57.963 "bdev_nvme_cuse_unregister", 00:06:57.963 "bdev_nvme_cuse_register", 00:06:57.963 "bdev_opal_new_user", 00:06:57.963 "bdev_opal_set_lock_state", 00:06:57.963 "bdev_opal_delete", 00:06:57.963 "bdev_opal_get_info", 00:06:57.963 "bdev_opal_create", 00:06:57.963 "bdev_nvme_opal_revert", 00:06:57.963 "bdev_nvme_opal_init", 00:06:57.963 "bdev_nvme_send_cmd", 00:06:57.963 "bdev_nvme_set_keys", 00:06:57.963 "bdev_nvme_get_path_iostat", 00:06:57.963 "bdev_nvme_get_mdns_discovery_info", 00:06:57.963 "bdev_nvme_stop_mdns_discovery", 00:06:57.963 "bdev_nvme_start_mdns_discovery", 00:06:57.963 "bdev_nvme_set_multipath_policy", 00:06:57.963 "bdev_nvme_set_preferred_path", 00:06:57.963 "bdev_nvme_get_io_paths", 00:06:57.963 "bdev_nvme_remove_error_injection", 00:06:57.963 "bdev_nvme_add_error_injection", 00:06:57.963 "bdev_nvme_get_discovery_info", 00:06:57.963 "bdev_nvme_stop_discovery", 00:06:57.963 "bdev_nvme_start_discovery", 00:06:57.963 "bdev_nvme_get_controller_health_info", 00:06:57.963 "bdev_nvme_disable_controller", 00:06:57.963 "bdev_nvme_enable_controller", 00:06:57.963 "bdev_nvme_reset_controller", 00:06:57.963 "bdev_nvme_get_transport_statistics", 00:06:57.963 "bdev_nvme_apply_firmware", 00:06:57.963 "bdev_nvme_detach_controller", 00:06:57.963 "bdev_nvme_get_controllers", 00:06:57.963 "bdev_nvme_attach_controller", 00:06:57.963 "bdev_nvme_set_hotplug", 00:06:57.963 "bdev_nvme_set_options", 00:06:57.963 "bdev_passthru_delete", 00:06:57.963 "bdev_passthru_create", 00:06:57.963 "bdev_lvol_set_parent_bdev", 00:06:57.963 "bdev_lvol_set_parent", 00:06:57.963 "bdev_lvol_check_shallow_copy", 00:06:57.963 "bdev_lvol_start_shallow_copy", 00:06:57.963 "bdev_lvol_grow_lvstore", 00:06:57.963 "bdev_lvol_get_lvols", 00:06:57.963 "bdev_lvol_get_lvstores", 00:06:57.963 "bdev_lvol_delete", 00:06:57.963 "bdev_lvol_set_read_only", 00:06:57.963 "bdev_lvol_resize", 00:06:57.963 "bdev_lvol_decouple_parent", 00:06:57.963 "bdev_lvol_inflate", 00:06:57.963 "bdev_lvol_rename", 00:06:57.963 "bdev_lvol_clone_bdev", 00:06:57.963 "bdev_lvol_clone", 00:06:57.963 "bdev_lvol_snapshot", 00:06:57.963 "bdev_lvol_create", 00:06:57.963 "bdev_lvol_delete_lvstore", 00:06:57.963 "bdev_lvol_rename_lvstore", 
00:06:57.963 "bdev_lvol_create_lvstore", 00:06:57.963 "bdev_raid_set_options", 00:06:57.963 "bdev_raid_remove_base_bdev", 00:06:57.963 "bdev_raid_add_base_bdev", 00:06:57.963 "bdev_raid_delete", 00:06:57.963 "bdev_raid_create", 00:06:57.964 "bdev_raid_get_bdevs", 00:06:57.964 "bdev_error_inject_error", 00:06:57.964 "bdev_error_delete", 00:06:57.964 "bdev_error_create", 00:06:57.964 "bdev_split_delete", 00:06:57.964 "bdev_split_create", 00:06:57.964 "bdev_delay_delete", 00:06:57.964 "bdev_delay_create", 00:06:57.964 "bdev_delay_update_latency", 00:06:57.964 "bdev_zone_block_delete", 00:06:57.964 "bdev_zone_block_create", 00:06:57.964 "blobfs_create", 00:06:57.964 "blobfs_detect", 00:06:57.964 "blobfs_set_cache_size", 00:06:57.964 "bdev_aio_delete", 00:06:57.964 "bdev_aio_rescan", 00:06:57.964 "bdev_aio_create", 00:06:57.964 "bdev_ftl_set_property", 00:06:57.964 "bdev_ftl_get_properties", 00:06:57.964 "bdev_ftl_get_stats", 00:06:57.964 "bdev_ftl_unmap", 00:06:57.964 "bdev_ftl_unload", 00:06:57.964 "bdev_ftl_delete", 00:06:57.964 "bdev_ftl_load", 00:06:57.964 "bdev_ftl_create", 00:06:57.964 "bdev_virtio_attach_controller", 00:06:57.964 "bdev_virtio_scsi_get_devices", 00:06:57.964 "bdev_virtio_detach_controller", 00:06:57.964 "bdev_virtio_blk_set_hotplug", 00:06:57.964 "bdev_iscsi_delete", 00:06:57.964 "bdev_iscsi_create", 00:06:57.964 "bdev_iscsi_set_options", 00:06:57.964 "accel_error_inject_error", 00:06:57.964 "ioat_scan_accel_module", 00:06:57.964 "dsa_scan_accel_module", 00:06:57.964 "iaa_scan_accel_module", 00:06:57.964 "vfu_virtio_create_fs_endpoint", 00:06:57.964 "vfu_virtio_create_scsi_endpoint", 00:06:57.964 "vfu_virtio_scsi_remove_target", 00:06:57.964 "vfu_virtio_scsi_add_target", 00:06:57.964 "vfu_virtio_create_blk_endpoint", 00:06:57.964 "vfu_virtio_delete_endpoint", 00:06:57.964 "keyring_file_remove_key", 00:06:57.964 "keyring_file_add_key", 00:06:57.964 "keyring_linux_set_options", 00:06:57.964 "fsdev_aio_delete", 00:06:57.964 "fsdev_aio_create", 00:06:57.964 "iscsi_get_histogram", 00:06:57.964 "iscsi_enable_histogram", 00:06:57.964 "iscsi_set_options", 00:06:57.964 "iscsi_get_auth_groups", 00:06:57.964 "iscsi_auth_group_remove_secret", 00:06:57.964 "iscsi_auth_group_add_secret", 00:06:57.964 "iscsi_delete_auth_group", 00:06:57.964 "iscsi_create_auth_group", 00:06:57.964 "iscsi_set_discovery_auth", 00:06:57.964 "iscsi_get_options", 00:06:57.964 "iscsi_target_node_request_logout", 00:06:57.964 "iscsi_target_node_set_redirect", 00:06:57.964 "iscsi_target_node_set_auth", 00:06:57.964 "iscsi_target_node_add_lun", 00:06:57.964 "iscsi_get_stats", 00:06:57.964 "iscsi_get_connections", 00:06:57.964 "iscsi_portal_group_set_auth", 00:06:57.964 "iscsi_start_portal_group", 00:06:57.964 "iscsi_delete_portal_group", 00:06:57.964 "iscsi_create_portal_group", 00:06:57.964 "iscsi_get_portal_groups", 00:06:57.964 "iscsi_delete_target_node", 00:06:57.964 "iscsi_target_node_remove_pg_ig_maps", 00:06:57.964 "iscsi_target_node_add_pg_ig_maps", 00:06:57.964 "iscsi_create_target_node", 00:06:57.964 "iscsi_get_target_nodes", 00:06:57.964 "iscsi_delete_initiator_group", 00:06:57.964 "iscsi_initiator_group_remove_initiators", 00:06:57.964 "iscsi_initiator_group_add_initiators", 00:06:57.964 "iscsi_create_initiator_group", 00:06:57.964 "iscsi_get_initiator_groups", 00:06:57.964 "nvmf_set_crdt", 00:06:57.964 "nvmf_set_config", 00:06:57.964 "nvmf_set_max_subsystems", 00:06:57.964 "nvmf_stop_mdns_prr", 00:06:57.964 "nvmf_publish_mdns_prr", 00:06:57.964 "nvmf_subsystem_get_listeners", 00:06:57.964 
"nvmf_subsystem_get_qpairs", 00:06:57.964 "nvmf_subsystem_get_controllers", 00:06:57.964 "nvmf_get_stats", 00:06:57.964 "nvmf_get_transports", 00:06:57.964 "nvmf_create_transport", 00:06:57.964 "nvmf_get_targets", 00:06:57.964 "nvmf_delete_target", 00:06:57.964 "nvmf_create_target", 00:06:57.964 "nvmf_subsystem_allow_any_host", 00:06:57.964 "nvmf_subsystem_set_keys", 00:06:57.964 "nvmf_subsystem_remove_host", 00:06:57.964 "nvmf_subsystem_add_host", 00:06:57.964 "nvmf_ns_remove_host", 00:06:57.964 "nvmf_ns_add_host", 00:06:57.964 "nvmf_subsystem_remove_ns", 00:06:57.964 "nvmf_subsystem_set_ns_ana_group", 00:06:57.964 "nvmf_subsystem_add_ns", 00:06:57.964 "nvmf_subsystem_listener_set_ana_state", 00:06:57.964 "nvmf_discovery_get_referrals", 00:06:57.964 "nvmf_discovery_remove_referral", 00:06:57.964 "nvmf_discovery_add_referral", 00:06:57.964 "nvmf_subsystem_remove_listener", 00:06:57.964 "nvmf_subsystem_add_listener", 00:06:57.964 "nvmf_delete_subsystem", 00:06:57.964 "nvmf_create_subsystem", 00:06:57.964 "nvmf_get_subsystems", 00:06:57.964 "env_dpdk_get_mem_stats", 00:06:57.964 "nbd_get_disks", 00:06:57.964 "nbd_stop_disk", 00:06:57.964 "nbd_start_disk", 00:06:57.964 "ublk_recover_disk", 00:06:57.964 "ublk_get_disks", 00:06:57.964 "ublk_stop_disk", 00:06:57.964 "ublk_start_disk", 00:06:57.964 "ublk_destroy_target", 00:06:57.964 "ublk_create_target", 00:06:57.964 "virtio_blk_create_transport", 00:06:57.964 "virtio_blk_get_transports", 00:06:57.964 "vhost_controller_set_coalescing", 00:06:57.964 "vhost_get_controllers", 00:06:57.964 "vhost_delete_controller", 00:06:57.964 "vhost_create_blk_controller", 00:06:57.964 "vhost_scsi_controller_remove_target", 00:06:57.964 "vhost_scsi_controller_add_target", 00:06:57.964 "vhost_start_scsi_controller", 00:06:57.964 "vhost_create_scsi_controller", 00:06:57.964 "thread_set_cpumask", 00:06:57.964 "scheduler_set_options", 00:06:57.964 "framework_get_governor", 00:06:57.964 "framework_get_scheduler", 00:06:57.964 "framework_set_scheduler", 00:06:57.964 "framework_get_reactors", 00:06:57.964 "thread_get_io_channels", 00:06:57.964 "thread_get_pollers", 00:06:57.964 "thread_get_stats", 00:06:57.964 "framework_monitor_context_switch", 00:06:57.964 "spdk_kill_instance", 00:06:57.964 "log_enable_timestamps", 00:06:57.964 "log_get_flags", 00:06:57.964 "log_clear_flag", 00:06:57.964 "log_set_flag", 00:06:57.964 "log_get_level", 00:06:57.964 "log_set_level", 00:06:57.964 "log_get_print_level", 00:06:57.964 "log_set_print_level", 00:06:57.964 "framework_enable_cpumask_locks", 00:06:57.964 "framework_disable_cpumask_locks", 00:06:57.964 "framework_wait_init", 00:06:57.964 "framework_start_init", 00:06:57.964 "scsi_get_devices", 00:06:57.964 "bdev_get_histogram", 00:06:57.964 "bdev_enable_histogram", 00:06:57.964 "bdev_set_qos_limit", 00:06:57.964 "bdev_set_qd_sampling_period", 00:06:57.964 "bdev_get_bdevs", 00:06:57.964 "bdev_reset_iostat", 00:06:57.964 "bdev_get_iostat", 00:06:57.964 "bdev_examine", 00:06:57.964 "bdev_wait_for_examine", 00:06:57.964 "bdev_set_options", 00:06:57.964 "accel_get_stats", 00:06:57.964 "accel_set_options", 00:06:57.964 "accel_set_driver", 00:06:57.964 "accel_crypto_key_destroy", 00:06:57.964 "accel_crypto_keys_get", 00:06:57.964 "accel_crypto_key_create", 00:06:57.964 "accel_assign_opc", 00:06:57.964 "accel_get_module_info", 00:06:57.964 "accel_get_opc_assignments", 00:06:57.964 "vmd_rescan", 00:06:57.964 "vmd_remove_device", 00:06:57.964 "vmd_enable", 00:06:57.964 "sock_get_default_impl", 00:06:57.964 "sock_set_default_impl", 
00:06:57.964 "sock_impl_set_options", 00:06:57.964 "sock_impl_get_options", 00:06:57.964 "iobuf_get_stats", 00:06:57.964 "iobuf_set_options", 00:06:57.964 "keyring_get_keys", 00:06:57.964 "vfu_tgt_set_base_path", 00:06:57.964 "framework_get_pci_devices", 00:06:57.964 "framework_get_config", 00:06:57.964 "framework_get_subsystems", 00:06:57.964 "fsdev_set_opts", 00:06:57.964 "fsdev_get_opts", 00:06:57.964 "trace_get_info", 00:06:57.964 "trace_get_tpoint_group_mask", 00:06:57.964 "trace_disable_tpoint_group", 00:06:57.964 "trace_enable_tpoint_group", 00:06:57.964 "trace_clear_tpoint_mask", 00:06:57.964 "trace_set_tpoint_mask", 00:06:57.964 "notify_get_notifications", 00:06:57.964 "notify_get_types", 00:06:57.964 "spdk_get_version", 00:06:57.964 "rpc_get_methods" 00:06:57.964 ] 00:06:57.964 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:57.964 06:10:47 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.964 06:10:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.964 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:57.964 06:10:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 942892 00:06:57.964 06:10:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 942892 ']' 00:06:57.964 06:10:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 942892 00:06:57.964 06:10:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:57.964 06:10:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.964 06:10:47 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 942892 00:06:57.964 06:10:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.964 06:10:48 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.964 06:10:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 942892' 00:06:57.964 killing process with pid 942892 00:06:57.964 06:10:48 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 942892 00:06:57.964 06:10:48 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 942892 00:06:58.532 00:06:58.532 real 0m1.357s 00:06:58.532 user 0m2.430s 00:06:58.532 sys 0m0.452s 00:06:58.532 06:10:48 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.532 06:10:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.532 ************************************ 00:06:58.532 END TEST spdkcli_tcp 00:06:58.532 ************************************ 00:06:58.532 06:10:48 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:58.532 06:10:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.532 06:10:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.532 06:10:48 -- common/autotest_common.sh@10 -- # set +x 00:06:58.532 ************************************ 00:06:58.532 START TEST dpdk_mem_utility 00:06:58.532 ************************************ 00:06:58.532 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:58.532 * Looking for test storage... 
00:06:58.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:58.532 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:58.532 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:58.533 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:58.533 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.533 06:10:48 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:58.533 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.533 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:58.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.533 --rc genhtml_branch_coverage=1 00:06:58.533 --rc genhtml_function_coverage=1 00:06:58.533 --rc genhtml_legend=1 00:06:58.533 --rc geninfo_all_blocks=1 00:06:58.533 --rc geninfo_unexecuted_blocks=1 00:06:58.533 00:06:58.533 ' 00:06:58.533 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:58.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.533 --rc 
genhtml_branch_coverage=1 00:06:58.533 --rc genhtml_function_coverage=1 00:06:58.533 --rc genhtml_legend=1 00:06:58.533 --rc geninfo_all_blocks=1 00:06:58.533 --rc geninfo_unexecuted_blocks=1 00:06:58.533 00:06:58.533 ' 00:06:58.533 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:58.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.533 --rc genhtml_branch_coverage=1 00:06:58.533 --rc genhtml_function_coverage=1 00:06:58.533 --rc genhtml_legend=1 00:06:58.533 --rc geninfo_all_blocks=1 00:06:58.533 --rc geninfo_unexecuted_blocks=1 00:06:58.533 00:06:58.533 ' 00:06:58.533 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:58.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.533 --rc genhtml_branch_coverage=1 00:06:58.533 --rc genhtml_function_coverage=1 00:06:58.533 --rc genhtml_legend=1 00:06:58.533 --rc geninfo_all_blocks=1 00:06:58.533 --rc geninfo_unexecuted_blocks=1 00:06:58.533 00:06:58.533 ' 00:06:58.533 06:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:58.533 06:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=943169 00:06:58.533 06:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:58.533 06:10:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 943169 00:06:58.533 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 943169 ']' 00:06:58.533 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.533 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.791 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.791 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.791 06:10:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:58.791 [2024-12-08 06:10:48.708776] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
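The dpdk_mem_utility test that follows dumps the target's DPDK memory map over RPC and then post-processes it with scripts/dpdk_mem_info.py. A minimal by-hand sketch of the same flow, assuming a built SPDK checkout at $SPDK_DIR and a target listening on the default /var/tmp/spdk.sock (both placeholders, not taken from this log):

  SPDK_DIR=/path/to/spdk                     # assumption: local SPDK checkout
  $SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats
  # prints {"filename": "/tmp/spdk_mem_dump.txt"}, as in the trace below
  $SPDK_DIR/scripts/dpdk_mem_info.py         # summarize heaps, mempools, memzones
  $SPDK_DIR/scripts/dpdk_mem_info.py -m 0    # per-element breakdown of heap 0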
00:06:58.791 [2024-12-08 06:10:48.708872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943169 ]
00:06:58.791 [2024-12-08 06:10:48.773148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:58.791 [2024-12-08 06:10:48.832737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.050 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:59.050 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:06:59.050 06:10:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:59.050 06:10:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:59.050 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.050 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:59.050 {
00:06:59.050 "filename": "/tmp/spdk_mem_dump.txt"
00:06:59.050 }
00:06:59.050 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.050 06:10:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:59.050 DPDK memory size 818.000000 MiB in 1 heap(s)
00:06:59.050 1 heaps totaling size 818.000000 MiB
00:06:59.050 size: 818.000000 MiB heap id: 0
00:06:59.050 end heaps----------
00:06:59.050 9 mempools totaling size 603.782043 MiB
00:06:59.050 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:06:59.050 size: 158.602051 MiB name: PDU_data_out_Pool
00:06:59.050 size: 100.555481 MiB name: bdev_io_943169
00:06:59.050 size: 50.003479 MiB name: msgpool_943169
00:06:59.050 size: 36.509338 MiB name: fsdev_io_943169
00:06:59.050 size: 21.763794 MiB name: PDU_Pool
00:06:59.050 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:59.050 size: 4.133484 MiB name: evtpool_943169
00:06:59.050 size: 0.026123 MiB name: Session_Pool
00:06:59.050 end mempools-------
00:06:59.050 6 memzones totaling size 4.142822 MiB
00:06:59.050 size: 1.000366 MiB name: RG_ring_0_943169
00:06:59.050 size: 1.000366 MiB name: RG_ring_1_943169
00:06:59.050 size: 1.000366 MiB name: RG_ring_4_943169
00:06:59.050 size: 1.000366 MiB name: RG_ring_5_943169
00:06:59.050 size: 0.125366 MiB name: RG_ring_2_943169
00:06:59.050 size: 0.015991 MiB name: RG_ring_3_943169
00:06:59.050 end memzones-------
00:06:59.050 06:10:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:06:59.310 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15
00:06:59.310 list of free elements. size: 10.852478 MiB
00:06:59.311 element at address: 0x200019200000 with size: 0.999878 MiB
00:06:59.311 element at address: 0x200019400000 with size: 0.999878 MiB
00:06:59.311 element at address: 0x200000400000 with size: 0.998535 MiB
00:06:59.311 element at address: 0x200032000000 with size: 0.994446 MiB
00:06:59.311 element at address: 0x200006400000 with size: 0.959839 MiB
00:06:59.311 element at address: 0x200012c00000 with size: 0.944275 MiB
00:06:59.311 element at address: 0x200019600000 with size: 0.936584 MiB
00:06:59.311 element at address: 0x200000200000 with size: 0.717346 MiB
00:06:59.311 element at address: 0x20001ae00000 with size: 0.582886 MiB
00:06:59.311 element at address: 0x200000c00000 with size: 0.495422 MiB
00:06:59.311 element at address: 0x20000a600000 with size: 0.490723 MiB
00:06:59.311 element at address: 0x200019800000 with size: 0.485657 MiB
00:06:59.311 element at address: 0x200003e00000 with size: 0.481934 MiB
00:06:59.311 element at address: 0x200028200000 with size: 0.410034 MiB
00:06:59.311 element at address: 0x200000800000 with size: 0.355042 MiB
00:06:59.311 list of standard malloc elements. size: 199.218628 MiB
00:06:59.311 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:06:59.311 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:06:59.311 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:06:59.311 element at address: 0x2000194fff80 with size: 1.000122 MiB
00:06:59.311 element at address: 0x2000196fff80 with size: 1.000122 MiB
00:06:59.311 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:06:59.311 element at address: 0x2000196eff00 with size: 0.062622 MiB
00:06:59.311 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:06:59.311 element at address: 0x2000196efdc0 with size: 0.000305 MiB
00:06:59.311 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:06:59.311 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:06:59.311 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:06:59.311 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:06:59.311 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:06:59.311 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:06:59.311 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:06:59.311 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:06:59.311 element at address: 0x20000085b040 with size: 0.000183 MiB
00:06:59.311 element at address: 0x20000085f300 with size: 0.000183 MiB
00:06:59.311 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:06:59.311 element at address: 0x20000087f680 with size: 0.000183 MiB
00:06:59.311 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:06:59.311 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:06:59.311 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:06:59.311 element at address: 0x200000cff000 with size: 0.000183 MiB
00:06:59.311 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:06:59.311 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:06:59.311 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:06:59.311 element at address: 0x200003efb980 with size: 0.000183 MiB
00:06:59.311 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:06:59.311 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:06:59.311 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:06:59.311 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:06:59.311 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:59.311 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:59.311 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:59.311 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:59.311 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:59.311 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:59.311 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:59.311 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:59.311 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:59.311 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:59.311 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:59.311 list of memzone associated elements. size: 607.928894 MiB 00:06:59.311 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:59.311 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:59.311 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:59.311 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:59.311 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:59.311 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_943169_0 00:06:59.311 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:59.311 associated memzone info: size: 48.002930 MiB name: MP_msgpool_943169_0 00:06:59.311 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:59.311 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_943169_0 00:06:59.311 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:59.311 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:59.311 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:59.311 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:59.311 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:59.311 associated memzone info: size: 3.000122 MiB name: MP_evtpool_943169_0 00:06:59.311 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:59.311 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_943169 00:06:59.311 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:59.311 associated memzone info: size: 1.007996 MiB name: MP_evtpool_943169 00:06:59.311 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:59.311 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:59.311 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:59.311 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:59.311 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:59.311 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:59.311 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:59.311 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:59.311 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:59.311 associated memzone info: size: 1.000366 MiB name: RG_ring_0_943169 00:06:59.311 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:59.311 associated memzone info: size: 1.000366 MiB name: RG_ring_1_943169 00:06:59.311 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:59.311 associated memzone info: size: 1.000366 MiB name: RG_ring_4_943169 00:06:59.311 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:06:59.311 associated memzone info: size: 1.000366 MiB name: RG_ring_5_943169 00:06:59.311 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:59.311 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_943169 00:06:59.311 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:59.311 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_943169 00:06:59.311 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:59.311 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:59.311 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:59.311 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:59.311 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:59.311 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:59.311 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:59.311 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_943169 00:06:59.311 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:59.311 associated memzone info: size: 0.125366 MiB name: RG_ring_2_943169 00:06:59.311 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:59.311 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:59.311 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:59.311 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:59.311 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:59.311 associated memzone info: size: 0.015991 MiB name: RG_ring_3_943169 00:06:59.311 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:59.311 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:59.311 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:59.311 associated memzone info: size: 0.000183 MiB name: MP_msgpool_943169 00:06:59.311 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:59.311 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_943169 00:06:59.311 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:59.311 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_943169 00:06:59.311 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:59.311 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:59.311 06:10:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:59.311 06:10:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 943169 00:06:59.311 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 943169 ']' 00:06:59.311 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 943169 00:06:59.311 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:59.311 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.311 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 943169 00:06:59.311 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.311 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.311 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 943169' 00:06:59.311 killing process with pid 943169 00:06:59.311 06:10:49 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 943169 00:06:59.311 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 943169 00:06:59.571 00:06:59.571 real 0m1.166s 00:06:59.571 user 0m1.137s 00:06:59.571 sys 0m0.421s 00:06:59.571 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.571 06:10:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:59.571 ************************************ 00:06:59.571 END TEST dpdk_mem_utility 00:06:59.571 ************************************ 00:06:59.830 06:10:49 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:59.830 06:10:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.830 06:10:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.830 06:10:49 -- common/autotest_common.sh@10 -- # set +x 00:06:59.830 ************************************ 00:06:59.830 START TEST event 00:06:59.830 ************************************ 00:06:59.830 06:10:49 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:59.830 * Looking for test storage... 00:06:59.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:59.830 06:10:49 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:59.830 06:10:49 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:59.830 06:10:49 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:59.830 06:10:49 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:59.830 06:10:49 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.830 06:10:49 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.830 06:10:49 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.830 06:10:49 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.830 06:10:49 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.830 06:10:49 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.830 06:10:49 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.830 06:10:49 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.830 06:10:49 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.830 06:10:49 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.830 06:10:49 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.830 06:10:49 event -- scripts/common.sh@344 -- # case "$op" in 00:06:59.830 06:10:49 event -- scripts/common.sh@345 -- # : 1 00:06:59.830 06:10:49 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.830 06:10:49 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.830 06:10:49 event -- scripts/common.sh@365 -- # decimal 1 00:06:59.830 06:10:49 event -- scripts/common.sh@353 -- # local d=1 00:06:59.830 06:10:49 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.830 06:10:49 event -- scripts/common.sh@355 -- # echo 1 00:06:59.830 06:10:49 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.830 06:10:49 event -- scripts/common.sh@366 -- # decimal 2 00:06:59.830 06:10:49 event -- scripts/common.sh@353 -- # local d=2 00:06:59.830 06:10:49 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.830 06:10:49 event -- scripts/common.sh@355 -- # echo 2 00:06:59.830 06:10:49 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.830 06:10:49 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.830 06:10:49 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.830 06:10:49 event -- scripts/common.sh@368 -- # return 0 00:06:59.830 06:10:49 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.830 06:10:49 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:59.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.830 --rc genhtml_branch_coverage=1 00:06:59.830 --rc genhtml_function_coverage=1 00:06:59.830 --rc genhtml_legend=1 00:06:59.830 --rc geninfo_all_blocks=1 00:06:59.830 --rc geninfo_unexecuted_blocks=1 00:06:59.830 00:06:59.830 ' 00:06:59.830 06:10:49 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:59.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.830 --rc genhtml_branch_coverage=1 00:06:59.830 --rc genhtml_function_coverage=1 00:06:59.830 --rc genhtml_legend=1 00:06:59.830 --rc geninfo_all_blocks=1 00:06:59.830 --rc geninfo_unexecuted_blocks=1 00:06:59.830 00:06:59.830 ' 00:06:59.830 06:10:49 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:59.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.830 --rc genhtml_branch_coverage=1 00:06:59.830 --rc genhtml_function_coverage=1 00:06:59.830 --rc genhtml_legend=1 00:06:59.830 --rc geninfo_all_blocks=1 00:06:59.830 --rc geninfo_unexecuted_blocks=1 00:06:59.830 00:06:59.830 ' 00:06:59.830 06:10:49 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:59.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.830 --rc genhtml_branch_coverage=1 00:06:59.830 --rc genhtml_function_coverage=1 00:06:59.830 --rc genhtml_legend=1 00:06:59.830 --rc geninfo_all_blocks=1 00:06:59.830 --rc geninfo_unexecuted_blocks=1 00:06:59.830 00:06:59.830 ' 00:06:59.830 06:10:49 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:59.830 06:10:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:59.830 06:10:49 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:59.830 06:10:49 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:59.830 06:10:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.830 06:10:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.830 ************************************ 00:06:59.830 START TEST event_perf 00:06:59.830 ************************************ 00:06:59.830 06:10:49 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:59.830 Running I/O for 1 seconds...[2024-12-08 06:10:49.904763] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:06:59.830 [2024-12-08 06:10:49.904821] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943418 ] 00:07:00.091 [2024-12-08 06:10:49.972537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.091 [2024-12-08 06:10:50.038149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.091 [2024-12-08 06:10:50.038212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.091 [2024-12-08 06:10:50.038276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.091 [2024-12-08 06:10:50.038279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.050 Running I/O for 1 seconds... 00:07:01.050 lcore 0: 232224 00:07:01.050 lcore 1: 232225 00:07:01.050 lcore 2: 232225 00:07:01.050 lcore 3: 232224 00:07:01.050 done. 00:07:01.050 00:07:01.050 real 0m1.211s 00:07:01.050 user 0m4.139s 00:07:01.050 sys 0m0.065s 00:07:01.050 06:10:51 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.050 06:10:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.050 ************************************ 00:07:01.050 END TEST event_perf 00:07:01.050 ************************************ 00:07:01.050 06:10:51 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:01.050 06:10:51 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:01.050 06:10:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.050 06:10:51 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.050 ************************************ 00:07:01.050 START TEST event_reactor 00:07:01.050 ************************************ 00:07:01.050 06:10:51 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:01.050 [2024-12-08 06:10:51.163330] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
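As an aside on the numbers above: event_perf is the reactor throughput microbenchmark, -m gives the core mask and -t the run time in seconds, and each reactor prints how many events it retired, so the four lcore counts of roughly 232k add up to about 929k events in the one-second window. A hedged sketch of re-running it by hand from a built SPDK tree (the cd path is a placeholder):

  cd /path/to/spdk                            # assumption: built checkout
  ./test/event/event_perf/event_perf -m 0xF -t 1
  # expect one "lcore N: <count>" line per core in the mask, then "done."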
00:07:01.050 [2024-12-08 06:10:51.163397] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943583 ] 00:07:01.310 [2024-12-08 06:10:51.229062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.310 [2024-12-08 06:10:51.284788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.249 test_start 00:07:02.249 oneshot 00:07:02.249 tick 100 00:07:02.249 tick 100 00:07:02.249 tick 250 00:07:02.249 tick 100 00:07:02.249 tick 100 00:07:02.249 tick 100 00:07:02.249 tick 250 00:07:02.249 tick 500 00:07:02.249 tick 100 00:07:02.249 tick 100 00:07:02.249 tick 250 00:07:02.249 tick 100 00:07:02.249 tick 100 00:07:02.249 test_end 00:07:02.249 00:07:02.249 real 0m1.199s 00:07:02.249 user 0m1.133s 00:07:02.249 sys 0m0.062s 00:07:02.249 06:10:52 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.249 06:10:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:02.249 ************************************ 00:07:02.249 END TEST event_reactor 00:07:02.249 ************************************ 00:07:02.509 06:10:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:02.509 06:10:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:02.509 06:10:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.509 06:10:52 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.509 ************************************ 00:07:02.509 START TEST event_reactor_perf 00:07:02.509 ************************************ 00:07:02.509 06:10:52 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:02.509 [2024-12-08 06:10:52.411986] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:07:02.509 [2024-12-08 06:10:52.412053] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943738 ] 00:07:02.509 [2024-12-08 06:10:52.476778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.509 [2024-12-08 06:10:52.532630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.887 test_start 00:07:03.887 test_end 00:07:03.887 Performance: 446015 events per second 00:07:03.887 00:07:03.887 real 0m1.198s 00:07:03.887 user 0m1.123s 00:07:03.887 sys 0m0.071s 00:07:03.887 06:10:53 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.887 06:10:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.887 ************************************ 00:07:03.887 END TEST event_reactor_perf 00:07:03.887 ************************************ 00:07:03.887 06:10:53 event -- event/event.sh@49 -- # uname -s 00:07:03.887 06:10:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:03.887 06:10:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:03.887 06:10:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.887 06:10:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.887 06:10:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.887 ************************************ 00:07:03.887 START TEST event_scheduler 00:07:03.887 ************************************ 00:07:03.887 06:10:53 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:03.887 * Looking for test storage... 
00:07:03.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:03.887 06:10:53 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:03.887 06:10:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:07:03.887 06:10:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:03.887 06:10:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.887 06:10:53 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:03.887 06:10:53 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.887 06:10:53 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:03.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.888 --rc genhtml_branch_coverage=1 00:07:03.888 --rc genhtml_function_coverage=1 00:07:03.888 --rc genhtml_legend=1 00:07:03.888 --rc geninfo_all_blocks=1 00:07:03.888 --rc geninfo_unexecuted_blocks=1 00:07:03.888 00:07:03.888 ' 00:07:03.888 06:10:53 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:03.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.888 --rc genhtml_branch_coverage=1 00:07:03.888 --rc genhtml_function_coverage=1 00:07:03.888 --rc genhtml_legend=1 00:07:03.888 --rc geninfo_all_blocks=1 00:07:03.888 --rc geninfo_unexecuted_blocks=1 00:07:03.888 00:07:03.888 ' 00:07:03.888 06:10:53 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:03.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.888 --rc genhtml_branch_coverage=1 00:07:03.888 --rc genhtml_function_coverage=1 00:07:03.888 --rc genhtml_legend=1 00:07:03.888 --rc geninfo_all_blocks=1 00:07:03.888 --rc geninfo_unexecuted_blocks=1 00:07:03.888 00:07:03.888 ' 00:07:03.888 06:10:53 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:03.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.888 --rc genhtml_branch_coverage=1 00:07:03.888 --rc genhtml_function_coverage=1 00:07:03.888 --rc genhtml_legend=1 00:07:03.888 --rc geninfo_all_blocks=1 00:07:03.888 --rc geninfo_unexecuted_blocks=1 00:07:03.888 00:07:03.888 ' 00:07:03.888 06:10:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:03.888 06:10:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=943924 00:07:03.888 06:10:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:03.888 06:10:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:03.888 06:10:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 943924 
00:07:03.888 06:10:53 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 943924 ']' 00:07:03.888 06:10:53 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.888 06:10:53 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.888 06:10:53 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.888 06:10:53 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.888 06:10:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:03.888 [2024-12-08 06:10:53.837794] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:07:03.888 [2024-12-08 06:10:53.837871] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943924 ] 00:07:03.888 [2024-12-08 06:10:53.904074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.888 [2024-12-08 06:10:53.964142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.888 [2024-12-08 06:10:53.964205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.888 [2024-12-08 06:10:53.964273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.888 [2024-12-08 06:10:53.964277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.146 06:10:54 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.146 06:10:54 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:04.146 06:10:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:04.146 06:10:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.146 06:10:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:04.146 [2024-12-08 06:10:54.069205] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:04.146 [2024-12-08 06:10:54.069233] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:04.146 [2024-12-08 06:10:54.069265] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:04.146 [2024-12-08 06:10:54.069277] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:04.146 [2024-12-08 06:10:54.069287] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:04.146 06:10:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.146 06:10:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:04.146 06:10:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.146 06:10:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:04.146 [2024-12-08 06:10:54.171003] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
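The scheduler_create_thread subtest below drives the test app's RPC plugin: it creates pinned threads with a cpumask and an active percentage, re-weights one, and deletes another. A hedged sketch of issuing the same calls by hand with rpc.py; the PYTHONPATH line is an assumption about where the scheduler_plugin module lives, and thread ids 11 and 12 are simply the ones this run happens to return:

  export PYTHONPATH=$SPDK_DIR/test/event/scheduler              # assumption
  RPC="$SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin"
  $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # returns a thread id
  $RPC scheduler_thread_set_active 11 50      # re-weight thread 11 to 50% active
  $RPC scheduler_thread_delete 12             # delete thread 12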
00:07:04.146 06:10:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.146 06:10:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:04.146 06:10:54 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.146 06:10:54 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.146 06:10:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:04.146 ************************************ 00:07:04.146 START TEST scheduler_create_thread 00:07:04.146 ************************************ 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.146 2 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.146 3 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.146 4 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.146 5 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.146 6 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.146 7 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.146 8 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.146 9 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.146 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.406 10 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.406 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.975 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.975 00:07:04.975 real 0m0.590s 00:07:04.975 user 0m0.010s 00:07:04.975 sys 0m0.003s 00:07:04.975 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.975 06:10:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.975 ************************************ 00:07:04.975 END TEST scheduler_create_thread 00:07:04.975 ************************************ 00:07:04.975 06:10:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:04.975 06:10:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 943924 00:07:04.975 06:10:54 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 943924 ']' 00:07:04.975 06:10:54 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 943924 00:07:04.975 06:10:54 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:04.975 06:10:54 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.975 06:10:54 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 943924 00:07:04.975 06:10:54 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:04.975 06:10:54 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:04.976 06:10:54 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 943924' 00:07:04.976 killing process with pid 943924 00:07:04.976 06:10:54 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 943924 00:07:04.976 06:10:54 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 943924 00:07:05.234 [2024-12-08 06:10:55.266998] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
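Two details of the run above are worth calling out: the app was launched with --wait-for-rpc, which is what let scheduler.sh select the dynamic scheduler before framework_start_init, and the dpdk governor failed to initialize on this host, so the scheduler fell back to its built-in limits (load 20, core 80, busy 95). A hedged sketch of that pre-init sequence against any SPDK app started with --wait-for-rpc; both RPCs appear in the rpc_get_methods inventory earlier in this log:

  $SPDK_DIR/scripts/rpc.py framework_set_scheduler dynamic
  $SPDK_DIR/scripts/rpc.py framework_start_init
  $SPDK_DIR/scripts/rpc.py framework_get_scheduler   # confirm "dynamic" is active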
00:07:05.491 00:07:05.491 real 0m1.829s 00:07:05.491 user 0m2.485s 00:07:05.491 sys 0m0.322s 00:07:05.491 06:10:55 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.491 06:10:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.491 ************************************ 00:07:05.491 END TEST event_scheduler 00:07:05.491 ************************************ 00:07:05.491 06:10:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:05.491 06:10:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:05.491 06:10:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.491 06:10:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.491 06:10:55 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.491 ************************************ 00:07:05.491 START TEST app_repeat 00:07:05.491 ************************************ 00:07:05.491 06:10:55 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=944235 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 944235' 00:07:05.491 Process app_repeat pid: 944235 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:05.491 spdk_app_start Round 0 00:07:05.491 06:10:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 944235 /var/tmp/spdk-nbd.sock 00:07:05.491 06:10:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 944235 ']' 00:07:05.491 06:10:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:05.491 06:10:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.491 06:10:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:05.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:05.491 06:10:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.491 06:10:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.491 [2024-12-08 06:10:55.558806] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
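The app_repeat traces below create two 64 MiB malloc bdevs with a 4096-byte block size over the test's private RPC socket and export them as kernel NBD devices. A hedged sketch of that export sequence (socket path and sizes exactly as traced below; the nbd_get_disks call is added here only as a convenient check):

  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096          # prints the new bdev name, e.g. Malloc0
  $RPC nbd_start_disk Malloc0 /dev/nbd0    # expose the bdev as /dev/nbd0
  $RPC nbd_get_disks                       # list active bdev-to-NBD mappings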
00:07:05.491 [2024-12-08 06:10:55.558871] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944235 ] 00:07:05.748 [2024-12-08 06:10:55.624194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.748 [2024-12-08 06:10:55.681171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.748 [2024-12-08 06:10:55.681175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.748 06:10:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.748 06:10:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:05.748 06:10:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.005 Malloc0 00:07:06.005 06:10:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.263 Malloc1 00:07:06.522 06:10:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.523 06:10:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:06.781 /dev/nbd0 00:07:06.781 06:10:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:06.782 06:10:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.782 1+0 records in 00:07:06.782 1+0 records out 00:07:06.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246993 s, 16.6 MB/s 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.782 06:10:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:06.782 06:10:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.782 06:10:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.782 06:10:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:07.041 /dev/nbd1 00:07:07.041 06:10:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.041 06:10:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.041 1+0 records in 00:07:07.041 1+0 records out 00:07:07.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192685 s, 21.3 MB/s 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.041 06:10:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:07.041 06:10:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.041 06:10:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.041 
06:10:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.041 06:10:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.041 06:10:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.299 06:10:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.299 { 00:07:07.299 "nbd_device": "/dev/nbd0", 00:07:07.299 "bdev_name": "Malloc0" 00:07:07.299 }, 00:07:07.299 { 00:07:07.299 "nbd_device": "/dev/nbd1", 00:07:07.299 "bdev_name": "Malloc1" 00:07:07.299 } 00:07:07.299 ]' 00:07:07.299 06:10:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.299 { 00:07:07.300 "nbd_device": "/dev/nbd0", 00:07:07.300 "bdev_name": "Malloc0" 00:07:07.300 }, 00:07:07.300 { 00:07:07.300 "nbd_device": "/dev/nbd1", 00:07:07.300 "bdev_name": "Malloc1" 00:07:07.300 } 00:07:07.300 ]' 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:07.300 /dev/nbd1' 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:07.300 /dev/nbd1' 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:07.300 256+0 records in 00:07:07.300 256+0 records out 00:07:07.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509457 s, 206 MB/s 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:07.300 256+0 records in 00:07:07.300 256+0 records out 00:07:07.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203748 s, 51.5 MB/s 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:07.300 256+0 records in 00:07:07.300 256+0 records out 00:07:07.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220997 s, 47.4 MB/s 00:07:07.300 06:10:57 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.300 06:10:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:07.558 06:10:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.558 06:10:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:07.558 06:10:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:07.558 06:10:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:07.558 06:10:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.558 06:10:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.558 06:10:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.558 06:10:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:07.558 06:10:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.558 06:10:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.817 06:10:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.817 06:10:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.817 06:10:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.817 06:10:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.817 06:10:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.817 06:10:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.817 06:10:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:07.817 06:10:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.817 06:10:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.817 06:10:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:08.076 06:10:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:08.076 06:10:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:08.076 06:10:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:08.076 06:10:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.076 06:10:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:07:08.076 06:10:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:08.076 06:10:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.076 06:10:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.076 06:10:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.076 06:10:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.076 06:10:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.334 06:10:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.334 06:10:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.334 06:10:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.334 06:10:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.334 06:10:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.334 06:10:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.334 06:10:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:08.334 06:10:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.334 06:10:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.334 06:10:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:08.334 06:10:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:08.334 06:10:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:08.334 06:10:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:08.593 06:10:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:08.853 [2024-12-08 06:10:58.840367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.853 [2024-12-08 06:10:58.895148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.853 [2024-12-08 06:10:58.895152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.853 [2024-12-08 06:10:58.953429] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:08.853 [2024-12-08 06:10:58.953516] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:12.146 06:11:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:12.146 06:11:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:12.146 spdk_app_start Round 1 00:07:12.146 06:11:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 944235 /var/tmp/spdk-nbd.sock 00:07:12.146 06:11:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 944235 ']' 00:07:12.146 06:11:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.146 06:11:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.146 06:11:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:12.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
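The nbd handshake traced in Round 0 (the waitfornbd polls over /proc/partitions followed by the direct-I/O dd probe) repeats before every device use below. A sketch of what those traced steps amount to, with one caveat: the log never shows the retry path because both checks pass on the first iteration, so the sleep between attempts and the temp-file name are assumptions.

    waitfornbd() {
        local nbd_name=$1 i size tmp=./nbd_read_check   # tmp name is illustrative
        # Phase 1 (@875-@877): poll until the kernel lists the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                   # assumed retry delay
        done
        # Phase 2 (@888-@893): prove a real read works, bypassing the page cache.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s "$tmp")               # 4096 in every round above
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0            # @892: nonzero read means ready
            fi
            sleep 0.1                                   # assumed retry delay
        done
        return 1
    }

Only after both phases return 0 does the test let dd and cmp loose on the device.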
00:07:12.146 06:11:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.146 06:11:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.146 06:11:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.146 06:11:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:12.146 06:11:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.146 Malloc0 00:07:12.146 06:11:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.403 Malloc1 00:07:12.403 06:11:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.403 06:11:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:12.709 /dev/nbd0 00:07:12.981 06:11:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.981 06:11:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:12.981 1+0 records in 00:07:12.981 1+0 records out 00:07:12.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267265 s, 15.3 MB/s 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.981 06:11:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:12.981 06:11:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.981 06:11:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.981 06:11:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:13.250 /dev/nbd1 00:07:13.250 06:11:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.250 06:11:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.250 1+0 records in 00:07:13.250 1+0 records out 00:07:13.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239167 s, 17.1 MB/s 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.250 06:11:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:13.250 06:11:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.250 06:11:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.250 06:11:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.250 06:11:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.250 06:11:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:13.507 { 00:07:13.507 "nbd_device": "/dev/nbd0", 00:07:13.507 "bdev_name": "Malloc0" 00:07:13.507 }, 00:07:13.507 { 00:07:13.507 "nbd_device": "/dev/nbd1", 00:07:13.507 "bdev_name": "Malloc1" 00:07:13.507 } 00:07:13.507 ]' 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.507 { 00:07:13.507 "nbd_device": "/dev/nbd0", 00:07:13.507 "bdev_name": "Malloc0" 00:07:13.507 }, 00:07:13.507 { 00:07:13.507 "nbd_device": "/dev/nbd1", 00:07:13.507 "bdev_name": "Malloc1" 00:07:13.507 } 00:07:13.507 ]' 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:13.507 /dev/nbd1' 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:13.507 /dev/nbd1' 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:13.507 06:11:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:13.507 256+0 records in 00:07:13.507 256+0 records out 00:07:13.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465109 s, 225 MB/s 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:13.508 256+0 records in 00:07:13.508 256+0 records out 00:07:13.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200344 s, 52.3 MB/s 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:13.508 256+0 records in 00:07:13.508 256+0 records out 00:07:13.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214608 s, 48.9 MB/s 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.508 06:11:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.766 06:11:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.766 06:11:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.766 06:11:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.766 06:11:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.766 06:11:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.766 06:11:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.766 06:11:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.766 06:11:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.766 06:11:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.766 06:11:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.024 06:11:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.024 06:11:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.024 06:11:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.024 06:11:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.024 06:11:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.024 06:11:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.024 06:11:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.024 06:11:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.024 06:11:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.024 06:11:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.024 06:11:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.589 06:11:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:14.589 06:11:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:14.589 06:11:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.589 06:11:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:14.589 06:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:14.589 06:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.589 06:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:14.589 06:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:14.589 06:11:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:14.589 06:11:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:14.589 06:11:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:14.589 06:11:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:14.589 06:11:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:14.848 06:11:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:14.848 [2024-12-08 06:11:04.962323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.107 [2024-12-08 06:11:05.016642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.107 [2024-12-08 06:11:05.016642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.107 [2024-12-08 06:11:05.076101] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:15.107 [2024-12-08 06:11:05.076202] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:17.655 06:11:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:17.655 06:11:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:17.655 spdk_app_start Round 2 00:07:17.655 06:11:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 944235 /var/tmp/spdk-nbd.sock 00:07:17.655 06:11:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 944235 ']' 00:07:17.655 06:11:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.655 06:11:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.655 06:11:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:17.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
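Each teardown above funnels through the same nbd_get_count pattern: fetch nbd_get_disks over the RPC socket, extract the device paths with jq, and count them with grep -c. A standalone sketch of that pipeline (function plumbing inferred from the trace, not copied from bdev/nbd_common.sh):

    nbd_get_count() {
        local rpc_server=$1 disks_json names count
        disks_json=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s "$rpc_server" nbd_get_disks)
        # Keep only the /dev/nbdN paths from the JSON array.
        names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits nonzero when nothing matches; the rescue
        # is the bare 'true' visible in the trace whenever the list is empty.
        count=$(echo "$names" | grep -c /dev/nbd) || true
        echo "$count"
    }

With both disks attached this yields 2; after nbd_stop_disk it yields 0, which is exactly the count=2 / count=0 pair bracketing every round.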
00:07:17.655 06:11:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.655 06:11:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.914 06:11:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.914 06:11:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:18.172 06:11:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:18.431 Malloc0 00:07:18.431 06:11:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:18.742 Malloc1 00:07:18.742 06:11:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.742 06:11:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:18.999 /dev/nbd0 00:07:18.999 06:11:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:18.999 06:11:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:18.999 1+0 records in 00:07:18.999 1+0 records out 00:07:18.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215839 s, 19.0 MB/s 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.999 06:11:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:18.999 06:11:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.999 06:11:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.999 06:11:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:19.256 /dev/nbd1 00:07:19.256 06:11:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:19.256 06:11:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.256 1+0 records in 00:07:19.256 1+0 records out 00:07:19.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195215 s, 21.0 MB/s 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.256 06:11:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:19.256 06:11:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.256 06:11:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.256 06:11:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.256 06:11:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.256 06:11:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:19.514 { 00:07:19.514 "nbd_device": "/dev/nbd0", 00:07:19.514 "bdev_name": "Malloc0" 00:07:19.514 }, 00:07:19.514 { 00:07:19.514 "nbd_device": "/dev/nbd1", 00:07:19.514 "bdev_name": "Malloc1" 00:07:19.514 } 00:07:19.514 ]' 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:19.514 { 00:07:19.514 "nbd_device": "/dev/nbd0", 00:07:19.514 "bdev_name": "Malloc0" 00:07:19.514 }, 00:07:19.514 { 00:07:19.514 "nbd_device": "/dev/nbd1", 00:07:19.514 "bdev_name": "Malloc1" 00:07:19.514 } 00:07:19.514 ]' 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:19.514 /dev/nbd1' 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:19.514 /dev/nbd1' 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:19.514 256+0 records in 00:07:19.514 256+0 records out 00:07:19.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515413 s, 203 MB/s 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:19.514 256+0 records in 00:07:19.514 256+0 records out 00:07:19.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203486 s, 51.5 MB/s 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:19.514 256+0 records in 00:07:19.514 256+0 records out 00:07:19.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222408 s, 47.1 MB/s 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.514 06:11:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.079 06:11:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.079 06:11:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.079 06:11:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.079 06:11:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.079 06:11:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.079 06:11:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.079 06:11:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.079 06:11:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.079 06:11:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.079 06:11:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.337 06:11:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.337 06:11:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.337 06:11:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.337 06:11:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.337 06:11:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.337 06:11:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.337 06:11:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.337 06:11:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.337 06:11:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.337 06:11:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.337 06:11:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.594 06:11:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:20.594 06:11:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:20.594 06:11:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.594 06:11:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:20.594 06:11:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:20.594 06:11:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.594 06:11:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:20.594 06:11:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:20.594 06:11:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:20.594 06:11:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:20.594 06:11:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:20.594 06:11:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:20.594 06:11:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:20.853 06:11:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:21.113 [2024-12-08 06:11:11.067696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.113 [2024-12-08 06:11:11.122338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.113 [2024-12-08 06:11:11.122343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.113 [2024-12-08 06:11:11.178944] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:21.113 [2024-12-08 06:11:11.179046] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:24.405 06:11:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 944235 /var/tmp/spdk-nbd.sock 00:07:24.405 06:11:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 944235 ']' 00:07:24.405 06:11:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:24.405 06:11:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.405 06:11:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:24.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
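The data-integrity core of every round is the nbd_dd_data_verify pair: a write pass that seeds 1 MiB of /dev/urandom through each device with O_DIRECT, and a verify pass that byte-compares it back with cmp. The traced helper takes the operation ('write' or 'verify') as an argument; the sketch below condenses the two invocations into one function for readability, so its shape is illustrative rather than verbatim.

    nbd_dd_data_verify() {
        local tmp_file=./nbdrandtest dev          # path shortened from the trace
        # Write pass (@74-@78): random 1 MiB pattern, pushed with O_DIRECT so
        # the bytes actually traverse the NBD server, not just the page cache.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for dev in "$@"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
        done
        # Verify pass (@80-@83): cmp -b prints any differing byte and exits
        # nonzero, failing the test; -n 1M limits it to the written pattern.
        for dev in "$@"; do
            cmp -b -n 1M "$tmp_file" "$dev"
        done
        rm "$tmp_file"                            # @85
    }

Called as nbd_dd_data_verify /dev/nbd0 /dev/nbd1, a silent run (as in all three rounds here) means every byte written through the Malloc bdevs read back intact.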
00:07:24.405 06:11:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.405 06:11:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:24.405 06:11:14 event.app_repeat -- event/event.sh@39 -- # killprocess 944235 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 944235 ']' 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 944235 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 944235 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 944235' 00:07:24.405 killing process with pid 944235 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@973 -- # kill 944235 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@978 -- # wait 944235 00:07:24.405 spdk_app_start is called in Round 0. 00:07:24.405 Shutdown signal received, stop current app iteration 00:07:24.405 Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 reinitialization... 00:07:24.405 spdk_app_start is called in Round 1. 00:07:24.405 Shutdown signal received, stop current app iteration 00:07:24.405 Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 reinitialization... 00:07:24.405 spdk_app_start is called in Round 2. 00:07:24.405 Shutdown signal received, stop current app iteration 00:07:24.405 Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 reinitialization... 00:07:24.405 spdk_app_start is called in Round 3. 
00:07:24.405 Shutdown signal received, stop current app iteration 00:07:24.405 06:11:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:24.405 06:11:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:24.405 00:07:24.405 real 0m18.836s 00:07:24.405 user 0m41.649s 00:07:24.405 sys 0m3.195s 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.405 06:11:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:24.405 ************************************ 00:07:24.405 END TEST app_repeat 00:07:24.405 ************************************ 00:07:24.405 06:11:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:24.405 06:11:14 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:24.405 06:11:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.405 06:11:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.405 06:11:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:24.405 ************************************ 00:07:24.405 START TEST cpu_locks 00:07:24.405 ************************************ 00:07:24.405 06:11:14 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:24.405 * Looking for test storage... 00:07:24.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:24.405 06:11:14 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:24.405 06:11:14 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:24.405 06:11:14 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:24.665 06:11:14 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.665 06:11:14 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:24.665 06:11:14 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.665 06:11:14 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:24.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.665 --rc genhtml_branch_coverage=1 00:07:24.665 --rc genhtml_function_coverage=1 00:07:24.665 --rc genhtml_legend=1 00:07:24.665 --rc geninfo_all_blocks=1 00:07:24.665 --rc geninfo_unexecuted_blocks=1 00:07:24.665 00:07:24.665 ' 00:07:24.665 06:11:14 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:24.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.665 --rc genhtml_branch_coverage=1 00:07:24.665 --rc genhtml_function_coverage=1 00:07:24.665 --rc genhtml_legend=1 00:07:24.665 --rc geninfo_all_blocks=1 00:07:24.665 --rc geninfo_unexecuted_blocks=1 00:07:24.665 00:07:24.665 ' 00:07:24.665 06:11:14 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:24.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.665 --rc genhtml_branch_coverage=1 00:07:24.665 --rc genhtml_function_coverage=1 00:07:24.665 --rc genhtml_legend=1 00:07:24.665 --rc geninfo_all_blocks=1 00:07:24.665 --rc geninfo_unexecuted_blocks=1 00:07:24.665 00:07:24.665 ' 00:07:24.665 06:11:14 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:24.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.665 --rc genhtml_branch_coverage=1 00:07:24.665 --rc genhtml_function_coverage=1 00:07:24.665 --rc genhtml_legend=1 00:07:24.665 --rc geninfo_all_blocks=1 00:07:24.665 --rc geninfo_unexecuted_blocks=1 00:07:24.665 00:07:24.665 ' 00:07:24.665 06:11:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:24.665 06:11:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:24.665 06:11:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:24.665 06:11:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:24.665 06:11:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.665 06:11:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.665 06:11:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.665 ************************************ 
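The lt/cmp_versions dance traced above (used to decide which lcov options the installed lcov understands) is a plain dotted-version comparison. A condensed sketch of the logic the trace walks through, assuming numeric components for brevity:

    # Condensed sketch of the scripts/common.sh version comparison traced above.
    lt() { cmp_versions "$1" '<' "$2"; }    # entry point, e.g. lt 1.15 2

    cmp_versions() {
        local IFS=.-:                       # split on dots, dashes, colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local c1=${ver1[v]:-0} c2=${ver2[v]:-0}   # missing components compare as 0
            ((c1 > c2)) && { [[ $op == '>' ]]; return; }
            ((c1 < c2)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]  # all components equal
    }

Here lt 1.15 2 succeeds, so the trace takes the branch that enables the newer --rc lcov_* options.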
00:07:24.665 START TEST default_locks 00:07:24.665 ************************************ 00:07:24.665 06:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:24.665 06:11:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=946728 00:07:24.665 06:11:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:24.665 06:11:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 946728 00:07:24.665 06:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 946728 ']' 00:07:24.665 06:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.665 06:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.665 06:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.666 06:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.666 06:11:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.666 [2024-12-08 06:11:14.651663] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:07:24.666 [2024-12-08 06:11:14.651776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946728 ] 00:07:24.666 [2024-12-08 06:11:14.717916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.666 [2024-12-08 06:11:14.777996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.925 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 946728 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 946728 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.183 lslocks: write error 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 946728 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 946728 ']' 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 946728 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 946728 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 946728' 
00:07:25.183 killing process with pid 946728 00:07:25.183 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 946728 00:07:25.184 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 946728 00:07:25.750 06:11:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 946728 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 946728 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 946728 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 946728 ']' 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (946728) - No such process 00:07:25.751 ERROR: process (pid: 946728) is no longer running 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:25.751 00:07:25.751 real 0m1.128s 00:07:25.751 user 0m1.098s 00:07:25.751 sys 0m0.497s 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.751 06:11:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.751 ************************************ 00:07:25.751 END TEST default_locks 00:07:25.751 ************************************ 00:07:25.751 06:11:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:25.751 06:11:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.751 06:11:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.751 06:11:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.751 ************************************ 00:07:25.751 START TEST default_locks_via_rpc 00:07:25.751 ************************************ 00:07:25.751 06:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:25.751 06:11:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=946890 00:07:25.751 06:11:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:25.751 06:11:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 946890 00:07:25.751 06:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 946890 ']' 00:07:25.751 06:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.751 06:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.751 06:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
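The waitforlisten call against the already-killed pid 946728 above is wrapped in NOT, the negated-expectation helper; the es bookkeeping and the final (( !es == 0 )) test are visible verbatim in the trace (the valid_exec_arg/type -t preamble is omitted here). A minimal sketch:

    # Sketch of the NOT helper: succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?           # run the real command, capture its exit status
        (( es > 128 )) && es=1  # deaths by signal count as a plain failure
        (( !es == 0 ))          # invert: a non-zero status from the command means PASS
    }

    # usage, as in the trace: the tgt is gone, so waiting for it must fail
    NOT waitforlisten 946728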
00:07:25.751 06:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.751 06:11:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.751 [2024-12-08 06:11:15.836556] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:07:25.751 [2024-12-08 06:11:15.836669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946890 ] 00:07:26.009 [2024-12-08 06:11:15.903515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.009 [2024-12-08 06:11:15.963338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 946890 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 946890 00:07:26.268 06:11:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:26.527 06:11:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 946890 00:07:26.527 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 946890 ']' 00:07:26.527 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 946890 00:07:26.527 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:26.527 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.527 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 946890 00:07:26.527 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.527 06:11:16 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.527 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 946890' 00:07:26.527 killing process with pid 946890 00:07:26.527 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 946890 00:07:26.527 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 946890 00:07:27.096 00:07:27.096 real 0m1.142s 00:07:27.096 user 0m1.106s 00:07:27.096 sys 0m0.503s 00:07:27.096 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.096 06:11:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.096 ************************************ 00:07:27.096 END TEST default_locks_via_rpc 00:07:27.096 ************************************ 00:07:27.096 06:11:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:27.096 06:11:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.096 06:11:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.096 06:11:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.096 ************************************ 00:07:27.096 START TEST non_locking_app_on_locked_coremask 00:07:27.096 ************************************ 00:07:27.096 06:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:27.097 06:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=947052 00:07:27.097 06:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.097 06:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 947052 /var/tmp/spdk.sock 00:07:27.097 06:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 947052 ']' 00:07:27.097 06:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.097 06:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.097 06:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.097 06:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.097 06:11:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.097 [2024-12-08 06:11:17.024747] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
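Each test launches spdk_tgt in the background and then blocks in waitforlisten until the RPC socket answers. The trace only shows the helper's outer shape (the pid guard, rpc_addr defaulting to /var/tmp/spdk.sock, max_retries=100, the "Waiting for process..." echo); the polling body below is an assumption about how such a helper is typically written:

    # Sketch of waitforlisten; variable names and defaults come from the trace,
    # the probe inside the retry loop is an assumption.
    waitforlisten() {
        [ -n "$1" ] || return 1             # '[' -z <pid> ']' guard in the trace
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$1" || return 1        # target died while we were waiting
            # assumed probe: up once the socket exists and an RPC round-trip works
            [ -S "$rpc_addr" ] && scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }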
00:07:27.097 [2024-12-08 06:11:17.024866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947052 ] 00:07:27.097 [2024-12-08 06:11:17.091471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.097 [2024-12-08 06:11:17.151174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.355 06:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.355 06:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:27.355 06:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=947071 00:07:27.355 06:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:27.355 06:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 947071 /var/tmp/spdk2.sock 00:07:27.355 06:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 947071 ']' 00:07:27.355 06:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.355 06:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.355 06:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.355 06:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.355 06:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.355 [2024-12-08 06:11:17.470905] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:07:27.355 [2024-12-08 06:11:17.470990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947071 ] 00:07:27.615 [2024-12-08 06:11:17.568499] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
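locks_exist, called after each successful launch in this suite, simply asks the kernel which file locks the pid holds and greps for the spdk_cpu_lock name; the recurring "lslocks: write error" lines are lslocks complaining on this kernel, not a test failure. The whole helper, as shown in the trace:

    # locks_exist: pass if the pid holds an spdk_cpu_lock_* file lock
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }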
00:07:27.615 [2024-12-08 06:11:17.568525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.615 [2024-12-08 06:11:17.680370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.550 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.550 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:28.550 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 947052 00:07:28.550 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 947052 00:07:28.550 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.116 lslocks: write error 00:07:29.116 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 947052 00:07:29.116 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 947052 ']' 00:07:29.116 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 947052 00:07:29.116 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:29.116 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.116 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 947052 00:07:29.116 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.116 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.116 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 947052' 00:07:29.116 killing process with pid 947052 00:07:29.116 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 947052 00:07:29.116 06:11:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 947052 00:07:29.681 06:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 947071 00:07:29.681 06:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 947071 ']' 00:07:29.681 06:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 947071 00:07:29.681 06:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:29.681 06:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.681 06:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 947071 00:07:29.940 06:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.940 06:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.940 06:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 947071' 00:07:29.940 killing 
process with pid 947071 00:07:29.940 06:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 947071 00:07:29.940 06:11:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 947071 00:07:30.198 00:07:30.198 real 0m3.283s 00:07:30.198 user 0m3.488s 00:07:30.198 sys 0m1.058s 00:07:30.198 06:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.198 06:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.198 ************************************ 00:07:30.198 END TEST non_locking_app_on_locked_coremask 00:07:30.198 ************************************ 00:07:30.198 06:11:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:30.198 06:11:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.198 06:11:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.198 06:11:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.198 ************************************ 00:07:30.198 START TEST locking_app_on_unlocked_coremask 00:07:30.198 ************************************ 00:07:30.198 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:30.198 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=947486 00:07:30.198 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:30.198 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 947486 /var/tmp/spdk.sock 00:07:30.198 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 947486 ']' 00:07:30.198 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.198 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.198 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.198 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.198 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.457 [2024-12-08 06:11:20.360454] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:07:30.457 [2024-12-08 06:11:20.360548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947486 ] 00:07:30.457 [2024-12-08 06:11:20.429341] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
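The test that just finished (non_locking_app_on_locked_coremask) and the one starting here (locking_app_on_unlocked_coremask) are mirror images of the same two-instance pattern; which side passes --disable-cpumask-locks swaps, but the assertion is the same. Stripped of bookkeeping, and writing spdk_tgt for the full build/bin path used in the trace:

    # Skeleton of the two-instance core-lock tests, per the launches above.
    # Instance 1 takes core 0 and, by default, the core lock:
    spdk_tgt -m 0x1 & tgt1=$!
    waitforlisten $tgt1

    # Instance 2 opts out of core-mask locking and uses its own RPC socket,
    # so sharing core 0 is allowed:
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & tgt2=$!
    waitforlisten $tgt2 /var/tmp/spdk2.sock

    locks_exist $tgt1    # only the locking instance holds spdk_cpu_lock_000
    killprocess $tgt1
    killprocess $tgt2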
00:07:30.457 [2024-12-08 06:11:20.429371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.457 [2024-12-08 06:11:20.484798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.716 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.716 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:30.716 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=947502 00:07:30.716 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:30.716 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 947502 /var/tmp/spdk2.sock 00:07:30.716 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 947502 ']' 00:07:30.716 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:30.716 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.716 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:30.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:30.716 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.716 06:11:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.716 [2024-12-08 06:11:20.811448] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:07:30.716 [2024-12-08 06:11:20.811527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947502 ] 00:07:30.978 [2024-12-08 06:11:20.910962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.978 [2024-12-08 06:11:21.022775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.915 06:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.915 06:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:31.915 06:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 947502 00:07:31.915 06:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 947502 00:07:31.915 06:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:32.173 lslocks: write error 00:07:32.173 06:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 947486 00:07:32.173 06:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 947486 ']' 00:07:32.173 06:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 947486 00:07:32.173 06:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:32.173 06:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.173 06:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 947486 00:07:32.433 06:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.433 06:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.433 06:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 947486' 00:07:32.433 killing process with pid 947486 00:07:32.433 06:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 947486 00:07:32.433 06:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 947486 00:07:32.998 06:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 947502 00:07:32.998 06:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 947502 ']' 00:07:32.998 06:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 947502 00:07:32.998 06:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:32.999 06:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.999 06:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 947502 00:07:33.257 06:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.257 06:11:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.257 06:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 947502' 00:07:33.257 killing process with pid 947502 00:07:33.257 06:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 947502 00:07:33.257 06:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 947502 00:07:33.516 00:07:33.516 real 0m3.236s 00:07:33.516 user 0m3.479s 00:07:33.516 sys 0m1.037s 00:07:33.516 06:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.516 06:11:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.516 ************************************ 00:07:33.516 END TEST locking_app_on_unlocked_coremask 00:07:33.516 ************************************ 00:07:33.516 06:11:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:33.516 06:11:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.516 06:11:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.516 06:11:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.516 ************************************ 00:07:33.516 START TEST locking_app_on_locked_coremask 00:07:33.516 ************************************ 00:07:33.516 06:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:33.516 06:11:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=947893 00:07:33.516 06:11:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:33.516 06:11:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 947893 /var/tmp/spdk.sock 00:07:33.516 06:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 947893 ']' 00:07:33.516 06:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.516 06:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.516 06:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.516 06:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.516 06:11:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.775 [2024-12-08 06:11:23.645637] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:07:33.775 [2024-12-08 06:11:23.645728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947893 ] 00:07:33.775 [2024-12-08 06:11:23.710144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.775 [2024-12-08 06:11:23.763551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=947934 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 947934 /var/tmp/spdk2.sock 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 947934 /var/tmp/spdk2.sock 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 947934 /var/tmp/spdk2.sock 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 947934 ']' 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:34.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.032 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.032 [2024-12-08 06:11:24.065572] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:07:34.032 [2024-12-08 06:11:24.065651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947934 ] 00:07:34.291 [2024-12-08 06:11:24.169738] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 947893 has claimed it. 00:07:34.291 [2024-12-08 06:11:24.169815] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:34.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (947934) - No such process 00:07:34.918 ERROR: process (pid: 947934) is no longer running 00:07:34.918 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.918 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:34.918 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:34.918 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.918 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.918 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.918 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 947893 00:07:34.918 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 947893 00:07:34.918 06:11:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:35.175 lslocks: write error 00:07:35.175 06:11:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 947893 00:07:35.175 06:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 947893 ']' 00:07:35.175 06:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 947893 00:07:35.175 06:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:35.175 06:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.175 06:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 947893 00:07:35.175 06:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.175 06:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.175 06:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 947893' 00:07:35.175 killing process with pid 947893 00:07:35.175 06:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 947893 00:07:35.175 06:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 947893 00:07:35.432 00:07:35.432 real 0m1.902s 00:07:35.432 user 0m2.114s 00:07:35.432 sys 0m0.606s 00:07:35.432 06:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.432 
06:11:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.432 ************************************ 00:07:35.432 END TEST locking_app_on_locked_coremask 00:07:35.432 ************************************ 00:07:35.432 06:11:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:35.432 06:11:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.432 06:11:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.432 06:11:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.432 ************************************ 00:07:35.432 START TEST locking_overlapped_coremask 00:07:35.432 ************************************ 00:07:35.432 06:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:35.432 06:11:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=948109 00:07:35.432 06:11:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:35.432 06:11:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 948109 /var/tmp/spdk.sock 00:07:35.432 06:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 948109 ']' 00:07:35.432 06:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.432 06:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.432 06:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.432 06:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.433 06:11:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.690 [2024-12-08 06:11:25.605512] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:07:35.690 [2024-12-08 06:11:25.605628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948109 ] 00:07:35.690 [2024-12-08 06:11:25.672673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.690 [2024-12-08 06:11:25.735156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.690 [2024-12-08 06:11:25.735216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.690 [2024-12-08 06:11:25.735220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=948229 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 948229 /var/tmp/spdk2.sock 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 948229 /var/tmp/spdk2.sock 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 948229 /var/tmp/spdk2.sock 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 948229 ']' 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.946 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.946 [2024-12-08 06:11:26.061840] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:07:35.946 [2024-12-08 06:11:26.061934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948229 ] 00:07:36.205 [2024-12-08 06:11:26.167673] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 948109 has claimed it. 00:07:36.205 [2024-12-08 06:11:26.167743] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:36.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (948229) - No such process 00:07:36.773 ERROR: process (pid: 948229) is no longer running 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 948109 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 948109 ']' 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 948109 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 948109 00:07:36.773 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.774 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.774 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 948109' 00:07:36.774 killing process with pid 948109 00:07:36.774 06:11:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 948109 00:07:36.774 06:11:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 948109 00:07:37.342 00:07:37.342 real 0m1.681s 00:07:37.342 user 0m4.698s 00:07:37.342 sys 0m0.448s 00:07:37.342 06:11:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.342 06:11:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.342 ************************************ 00:07:37.342 END TEST locking_overlapped_coremask 00:07:37.342 ************************************ 00:07:37.342 06:11:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:37.342 06:11:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.342 06:11:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.342 06:11:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.342 ************************************ 00:07:37.342 START TEST locking_overlapped_coremask_via_rpc 00:07:37.342 ************************************ 00:07:37.342 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:37.342 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=948395 00:07:37.342 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:37.342 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 948395 /var/tmp/spdk.sock 00:07:37.342 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 948395 ']' 00:07:37.342 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.342 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.342 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.342 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.342 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.342 [2024-12-08 06:11:27.335237] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:07:37.342 [2024-12-08 06:11:27.335347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948395 ] 00:07:37.342 [2024-12-08 06:11:27.405291] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
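The overlapped-coremask test that just ended verifies cleanup with check_remaining_locks, comparing the lock files actually present under /var/tmp against the set a 0x7 mask (cores 0-2) should leave behind. Per the trace, with the escaped glob on the right-hand side written as a quoted string:

    # check_remaining_locks as traced above: one lock file per claimed core,
    # /var/tmp/spdk_cpu_lock_000 .. _002 for a 0x7 mask.
    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }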
00:07:37.342 [2024-12-08 06:11:27.405336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.602 [2024-12-08 06:11:27.467410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.602 [2024-12-08 06:11:27.467466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.602 [2024-12-08 06:11:27.467469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.861 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.861 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:37.861 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=948412 00:07:37.861 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 948412 /var/tmp/spdk2.sock 00:07:37.861 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 948412 ']' 00:07:37.861 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.861 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.861 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.861 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:37.861 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.861 06:11:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.861 [2024-12-08 06:11:27.800069] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:07:37.861 [2024-12-08 06:11:27.800149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948412 ] 00:07:37.861 [2024-12-08 06:11:27.903016] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:37.861 [2024-12-08 06:11:27.903065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.120 [2024-12-08 06:11:28.024284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.120 [2024-12-08 06:11:28.027819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:38.120 [2024-12-08 06:11:28.027822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.692 [2024-12-08 06:11:28.790824] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 948395 has claimed it. 
00:07:38.692 request: 00:07:38.692 { 00:07:38.692 "method": "framework_enable_cpumask_locks", 00:07:38.692 "req_id": 1 00:07:38.692 } 00:07:38.692 Got JSON-RPC error response 00:07:38.692 response: 00:07:38.692 { 00:07:38.692 "code": -32603, 00:07:38.692 "message": "Failed to claim CPU core: 2" 00:07:38.692 } 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 948395 /var/tmp/spdk.sock 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 948395 ']' 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.692 06:11:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.951 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.951 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:38.951 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 948412 /var/tmp/spdk2.sock 00:07:38.951 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 948412 ']' 00:07:38.951 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:38.951 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.951 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:38.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
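The exchange above is the expected-failure path: with pid 948395 still holding the core-2 lock, framework_enable_cpumask_locks on the second target must come back with the -32603 error, and the harness's NOT wrapper turns that failure into a pass. A minimal equivalent of the pattern (sketch only; the real NOT lives in autotest_common.sh):

    NOT() { if "$@"; then return 1; else return 0; fi; }
    # must fail while another target holds the lock for core 2
    NOT scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks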
00:07:38.951 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.951 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.521 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.522 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:39.522 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:39.522 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:39.522 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:39.522 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:39.522 00:07:39.522 real 0m2.066s 00:07:39.522 user 0m1.161s 00:07:39.522 sys 0m0.167s 00:07:39.522 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.522 06:11:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.522 ************************************ 00:07:39.522 END TEST locking_overlapped_coremask_via_rpc 00:07:39.522 ************************************ 00:07:39.522 06:11:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:39.522 06:11:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 948395 ]] 00:07:39.522 06:11:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 948395 00:07:39.522 06:11:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 948395 ']' 00:07:39.522 06:11:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 948395 00:07:39.522 06:11:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:39.522 06:11:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.522 06:11:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 948395 00:07:39.522 06:11:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.522 06:11:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.522 06:11:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 948395' 00:07:39.522 killing process with pid 948395 00:07:39.522 06:11:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 948395 00:07:39.522 06:11:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 948395 00:07:39.781 06:11:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 948412 ]] 00:07:39.781 06:11:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 948412 00:07:39.781 06:11:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 948412 ']' 00:07:39.781 06:11:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 948412 00:07:39.781 06:11:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:39.781 06:11:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:07:39.781 06:11:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 948412 00:07:39.782 06:11:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:39.782 06:11:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:39.782 06:11:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 948412' 00:07:39.782 killing process with pid 948412 00:07:39.782 06:11:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 948412 00:07:39.782 06:11:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 948412 00:07:40.351 06:11:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:40.351 06:11:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:40.351 06:11:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 948395 ]] 00:07:40.351 06:11:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 948395 00:07:40.351 06:11:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 948395 ']' 00:07:40.351 06:11:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 948395 00:07:40.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (948395) - No such process 00:07:40.351 06:11:30 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 948395 is not found' 00:07:40.351 Process with pid 948395 is not found 00:07:40.351 06:11:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 948412 ]] 00:07:40.351 06:11:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 948412 00:07:40.351 06:11:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 948412 ']' 00:07:40.351 06:11:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 948412 00:07:40.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (948412) - No such process 00:07:40.351 06:11:30 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 948412 is not found' 00:07:40.351 Process with pid 948412 is not found 00:07:40.351 06:11:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:40.351 00:07:40.351 real 0m15.890s 00:07:40.351 user 0m28.877s 00:07:40.351 sys 0m5.264s 00:07:40.351 06:11:30 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.351 06:11:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.351 ************************************ 00:07:40.351 END TEST cpu_locks 00:07:40.351 ************************************ 00:07:40.351 00:07:40.351 real 0m40.616s 00:07:40.351 user 1m19.639s 00:07:40.351 sys 0m9.224s 00:07:40.351 06:11:30 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.351 06:11:30 event -- common/autotest_common.sh@10 -- # set +x 00:07:40.351 ************************************ 00:07:40.351 END TEST event 00:07:40.351 ************************************ 00:07:40.351 06:11:30 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:40.351 06:11:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.351 06:11:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.351 06:11:30 -- common/autotest_common.sh@10 -- # set +x 00:07:40.351 ************************************ 00:07:40.351 START TEST thread 00:07:40.351 ************************************ 00:07:40.351 06:11:30 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:40.351 * Looking for test storage... 00:07:40.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:40.351 06:11:30 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:40.351 06:11:30 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:40.352 06:11:30 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:40.612 06:11:30 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:40.612 06:11:30 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.612 06:11:30 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.612 06:11:30 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.612 06:11:30 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.612 06:11:30 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.612 06:11:30 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.612 06:11:30 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.612 06:11:30 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.612 06:11:30 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.612 06:11:30 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.612 06:11:30 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.612 06:11:30 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:40.612 06:11:30 thread -- scripts/common.sh@345 -- # : 1 00:07:40.612 06:11:30 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.612 06:11:30 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.612 06:11:30 thread -- scripts/common.sh@365 -- # decimal 1 00:07:40.612 06:11:30 thread -- scripts/common.sh@353 -- # local d=1 00:07:40.612 06:11:30 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.612 06:11:30 thread -- scripts/common.sh@355 -- # echo 1 00:07:40.612 06:11:30 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.612 06:11:30 thread -- scripts/common.sh@366 -- # decimal 2 00:07:40.612 06:11:30 thread -- scripts/common.sh@353 -- # local d=2 00:07:40.612 06:11:30 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.612 06:11:30 thread -- scripts/common.sh@355 -- # echo 2 00:07:40.612 06:11:30 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.612 06:11:30 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.612 06:11:30 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.612 06:11:30 thread -- scripts/common.sh@368 -- # return 0 00:07:40.612 06:11:30 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.612 06:11:30 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.612 --rc genhtml_branch_coverage=1 00:07:40.612 --rc genhtml_function_coverage=1 00:07:40.612 --rc genhtml_legend=1 00:07:40.612 --rc geninfo_all_blocks=1 00:07:40.612 --rc geninfo_unexecuted_blocks=1 00:07:40.612 00:07:40.612 ' 00:07:40.612 06:11:30 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.612 --rc genhtml_branch_coverage=1 00:07:40.612 --rc genhtml_function_coverage=1 00:07:40.612 --rc genhtml_legend=1 00:07:40.612 --rc geninfo_all_blocks=1 00:07:40.612 --rc geninfo_unexecuted_blocks=1 00:07:40.612 00:07:40.612 ' 00:07:40.612 06:11:30 thread 
-- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.612 --rc genhtml_branch_coverage=1 00:07:40.612 --rc genhtml_function_coverage=1 00:07:40.612 --rc genhtml_legend=1 00:07:40.612 --rc geninfo_all_blocks=1 00:07:40.612 --rc geninfo_unexecuted_blocks=1 00:07:40.612 00:07:40.612 ' 00:07:40.612 06:11:30 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.612 --rc genhtml_branch_coverage=1 00:07:40.612 --rc genhtml_function_coverage=1 00:07:40.612 --rc genhtml_legend=1 00:07:40.612 --rc geninfo_all_blocks=1 00:07:40.612 --rc geninfo_unexecuted_blocks=1 00:07:40.612 00:07:40.612 ' 00:07:40.612 06:11:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:40.612 06:11:30 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:40.612 06:11:30 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.612 06:11:30 thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.612 ************************************ 00:07:40.612 START TEST thread_poller_perf 00:07:40.612 ************************************ 00:07:40.612 06:11:30 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:40.612 [2024-12-08 06:11:30.577186] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:07:40.612 [2024-12-08 06:11:30.577250] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948908 ] 00:07:40.612 [2024-12-08 06:11:30.642391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.612 [2024-12-08 06:11:30.697378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.612 Running 1000 pollers for 1 seconds with 1 microseconds period. 
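The results block that follows reports raw busy cycles, the run count, and a derived per-poller cost; assuming poller_cost = busy / total_run_count, with nanoseconds obtained by dividing by the TSC rate in GHz, the first run's figures are self-consistent:

    # sanity check of the derivation (assumed formula; matches both runs)
    awk 'BEGIN { busy=2713228815; runs=365000; hz=2700000000
                 cyc=int(busy/runs)
                 printf "%d cyc, %d nsec\n", cyc, int(cyc/(hz/1e9)) }'
    # -> 7433 cyc, 2752 nsec, as reported below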
00:07:41.987 [2024-12-08T05:11:32.106Z] ====================================== 00:07:41.987 [2024-12-08T05:11:32.106Z] busy:2713228815 (cyc) 00:07:41.987 [2024-12-08T05:11:32.106Z] total_run_count: 365000 00:07:41.987 [2024-12-08T05:11:32.106Z] tsc_hz: 2700000000 (cyc) 00:07:41.987 [2024-12-08T05:11:32.106Z] ====================================== 00:07:41.987 [2024-12-08T05:11:32.106Z] poller_cost: 7433 (cyc), 2752 (nsec) 00:07:41.987 00:07:41.987 real 0m1.205s 00:07:41.987 user 0m1.133s 00:07:41.987 sys 0m0.067s 00:07:41.987 06:11:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.987 06:11:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:41.987 ************************************ 00:07:41.987 END TEST thread_poller_perf 00:07:41.987 ************************************ 00:07:41.987 06:11:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:41.987 06:11:31 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:41.987 06:11:31 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.987 06:11:31 thread -- common/autotest_common.sh@10 -- # set +x 00:07:41.987 ************************************ 00:07:41.987 START TEST thread_poller_perf 00:07:41.987 ************************************ 00:07:41.987 06:11:31 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:41.987 [2024-12-08 06:11:31.828787] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:07:41.987 [2024-12-08 06:11:31.828848] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949063 ] 00:07:41.987 [2024-12-08 06:11:31.893371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.987 [2024-12-08 06:11:31.949352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.987 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:42.920 [2024-12-08T05:11:33.039Z] ====================================== 00:07:42.920 [2024-12-08T05:11:33.039Z] busy:2702328771 (cyc) 00:07:42.920 [2024-12-08T05:11:33.039Z] total_run_count: 4308000 00:07:42.920 [2024-12-08T05:11:33.039Z] tsc_hz: 2700000000 (cyc) 00:07:42.920 [2024-12-08T05:11:33.040Z] ====================================== 00:07:42.921 [2024-12-08T05:11:33.040Z] poller_cost: 627 (cyc), 232 (nsec) 00:07:42.921 00:07:42.921 real 0m1.198s 00:07:42.921 user 0m1.127s 00:07:42.921 sys 0m0.066s 00:07:42.921 06:11:33 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.921 06:11:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:42.921 ************************************ 00:07:42.921 END TEST thread_poller_perf 00:07:42.921 ************************************ 00:07:42.921 06:11:33 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:42.921 00:07:42.921 real 0m2.647s 00:07:42.921 user 0m2.392s 00:07:42.921 sys 0m0.260s 00:07:42.921 06:11:33 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.921 06:11:33 thread -- common/autotest_common.sh@10 -- # set +x 00:07:42.921 ************************************ 00:07:42.921 END TEST thread 00:07:42.921 ************************************ 00:07:43.178 06:11:33 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:43.178 06:11:33 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:43.178 06:11:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.178 06:11:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.178 06:11:33 -- common/autotest_common.sh@10 -- # set +x 00:07:43.178 ************************************ 00:07:43.178 START TEST app_cmdline 00:07:43.178 ************************************ 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:43.178 * Looking for test storage... 
00:07:43.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.178 06:11:33 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:43.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.178 --rc genhtml_branch_coverage=1 00:07:43.178 --rc genhtml_function_coverage=1 00:07:43.178 --rc genhtml_legend=1 00:07:43.178 --rc geninfo_all_blocks=1 00:07:43.178 --rc geninfo_unexecuted_blocks=1 00:07:43.178 00:07:43.178 ' 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:43.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.178 --rc genhtml_branch_coverage=1 00:07:43.178 --rc genhtml_function_coverage=1 00:07:43.178 --rc genhtml_legend=1 00:07:43.178 --rc geninfo_all_blocks=1 00:07:43.178 --rc geninfo_unexecuted_blocks=1 
00:07:43.178 00:07:43.178 ' 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:43.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.178 --rc genhtml_branch_coverage=1 00:07:43.178 --rc genhtml_function_coverage=1 00:07:43.178 --rc genhtml_legend=1 00:07:43.178 --rc geninfo_all_blocks=1 00:07:43.178 --rc geninfo_unexecuted_blocks=1 00:07:43.178 00:07:43.178 ' 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:43.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.178 --rc genhtml_branch_coverage=1 00:07:43.178 --rc genhtml_function_coverage=1 00:07:43.178 --rc genhtml_legend=1 00:07:43.178 --rc geninfo_all_blocks=1 00:07:43.178 --rc geninfo_unexecuted_blocks=1 00:07:43.178 00:07:43.178 ' 00:07:43.178 06:11:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:43.178 06:11:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=949264 00:07:43.178 06:11:33 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:43.178 06:11:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 949264 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 949264 ']' 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.178 06:11:33 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.179 06:11:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.179 [2024-12-08 06:11:33.290823] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
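This cmdline run starts the target with --rpcs-allowed, so only spdk_get_version and rpc_get_methods are served; the trace below confirms that a non-whitelisted method is rejected with JSON-RPC error -32601 (Method not found). The shape of the test, sketched with the same binaries used throughout this log:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version         # allowed: returns the version JSON
    scripts/rpc.py env_dpdk_get_mem_stats   # not listed: fails with -32601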
00:07:43.179 [2024-12-08 06:11:33.290905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949264 ] 00:07:43.437 [2024-12-08 06:11:33.359647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.437 [2024-12-08 06:11:33.416537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.696 06:11:33 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.696 06:11:33 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:43.696 06:11:33 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:43.954 { 00:07:43.954 "version": "SPDK v25.01-pre git sha1 c0f3f2d18", 00:07:43.954 "fields": { 00:07:43.954 "major": 25, 00:07:43.954 "minor": 1, 00:07:43.954 "patch": 0, 00:07:43.954 "suffix": "-pre", 00:07:43.954 "commit": "c0f3f2d18" 00:07:43.954 } 00:07:43.954 } 00:07:43.954 06:11:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:43.954 06:11:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:43.954 06:11:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:43.954 06:11:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:43.954 06:11:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.954 06:11:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.954 06:11:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.954 06:11:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:43.954 06:11:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:43.954 06:11:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:43.954 06:11:33 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.213 request: 00:07:44.213 { 00:07:44.213 "method": "env_dpdk_get_mem_stats", 00:07:44.213 "req_id": 1 00:07:44.213 } 00:07:44.213 Got JSON-RPC error response 00:07:44.213 response: 00:07:44.213 { 00:07:44.213 "code": -32601, 00:07:44.213 "message": "Method not found" 00:07:44.213 } 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:44.213 06:11:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 949264 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 949264 ']' 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 949264 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 949264 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 949264' 00:07:44.213 killing process with pid 949264 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@973 -- # kill 949264 00:07:44.213 06:11:34 app_cmdline -- common/autotest_common.sh@978 -- # wait 949264 00:07:44.852 00:07:44.852 real 0m1.622s 00:07:44.852 user 0m2.004s 00:07:44.852 sys 0m0.472s 00:07:44.852 06:11:34 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.852 06:11:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:44.852 ************************************ 00:07:44.852 END TEST app_cmdline 00:07:44.852 ************************************ 00:07:44.852 06:11:34 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:44.852 06:11:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.852 06:11:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.852 06:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:44.852 ************************************ 00:07:44.852 START TEST version 00:07:44.852 ************************************ 00:07:44.852 06:11:34 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:44.852 * Looking for test storage... 
00:07:44.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:44.852 06:11:34 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:44.852 06:11:34 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:44.853 06:11:34 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:44.853 06:11:34 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:44.853 06:11:34 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.853 06:11:34 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.853 06:11:34 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.853 06:11:34 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.853 06:11:34 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.853 06:11:34 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.853 06:11:34 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.853 06:11:34 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.853 06:11:34 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.853 06:11:34 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.853 06:11:34 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.853 06:11:34 version -- scripts/common.sh@344 -- # case "$op" in 00:07:44.853 06:11:34 version -- scripts/common.sh@345 -- # : 1 00:07:44.853 06:11:34 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.853 06:11:34 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:44.853 06:11:34 version -- scripts/common.sh@365 -- # decimal 1 00:07:44.853 06:11:34 version -- scripts/common.sh@353 -- # local d=1 00:07:44.853 06:11:34 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.853 06:11:34 version -- scripts/common.sh@355 -- # echo 1 00:07:44.853 06:11:34 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.853 06:11:34 version -- scripts/common.sh@366 -- # decimal 2 00:07:44.853 06:11:34 version -- scripts/common.sh@353 -- # local d=2 00:07:44.853 06:11:34 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.853 06:11:34 version -- scripts/common.sh@355 -- # echo 2 00:07:44.853 06:11:34 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.853 06:11:34 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.853 06:11:34 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.853 06:11:34 version -- scripts/common.sh@368 -- # return 0 00:07:44.853 06:11:34 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.853 06:11:34 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:44.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.853 --rc genhtml_branch_coverage=1 00:07:44.853 --rc genhtml_function_coverage=1 00:07:44.853 --rc genhtml_legend=1 00:07:44.853 --rc geninfo_all_blocks=1 00:07:44.853 --rc geninfo_unexecuted_blocks=1 00:07:44.853 00:07:44.853 ' 00:07:44.853 06:11:34 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:44.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.853 --rc genhtml_branch_coverage=1 00:07:44.853 --rc genhtml_function_coverage=1 00:07:44.853 --rc genhtml_legend=1 00:07:44.853 --rc geninfo_all_blocks=1 00:07:44.853 --rc geninfo_unexecuted_blocks=1 00:07:44.853 00:07:44.853 ' 00:07:44.853 06:11:34 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:44.853 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.853 --rc genhtml_branch_coverage=1 00:07:44.853 --rc genhtml_function_coverage=1 00:07:44.853 --rc genhtml_legend=1 00:07:44.853 --rc geninfo_all_blocks=1 00:07:44.853 --rc geninfo_unexecuted_blocks=1 00:07:44.853 00:07:44.853 ' 00:07:44.853 06:11:34 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:44.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.853 --rc genhtml_branch_coverage=1 00:07:44.853 --rc genhtml_function_coverage=1 00:07:44.853 --rc genhtml_legend=1 00:07:44.853 --rc geninfo_all_blocks=1 00:07:44.853 --rc geninfo_unexecuted_blocks=1 00:07:44.853 00:07:44.853 ' 00:07:44.853 06:11:34 version -- app/version.sh@17 -- # get_header_version major 00:07:44.853 06:11:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:44.853 06:11:34 version -- app/version.sh@14 -- # cut -f2 00:07:44.853 06:11:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.853 06:11:34 version -- app/version.sh@17 -- # major=25 00:07:44.853 06:11:34 version -- app/version.sh@18 -- # get_header_version minor 00:07:44.853 06:11:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:44.853 06:11:34 version -- app/version.sh@14 -- # cut -f2 00:07:44.853 06:11:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.853 06:11:34 version -- app/version.sh@18 -- # minor=1 00:07:44.853 06:11:34 version -- app/version.sh@19 -- # get_header_version patch 00:07:44.853 06:11:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:44.853 06:11:34 version -- app/version.sh@14 -- # cut -f2 00:07:44.853 06:11:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.853 06:11:34 version -- app/version.sh@19 -- # patch=0 00:07:44.853 06:11:34 version -- app/version.sh@20 -- # get_header_version suffix 00:07:44.853 06:11:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:44.853 06:11:34 version -- app/version.sh@14 -- # cut -f2 00:07:44.853 06:11:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.853 06:11:34 version -- app/version.sh@20 -- # suffix=-pre 00:07:44.853 06:11:34 version -- app/version.sh@22 -- # version=25.1 00:07:44.853 06:11:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:44.853 06:11:34 version -- app/version.sh@28 -- # version=25.1rc0 00:07:44.853 06:11:34 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:44.853 06:11:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:44.853 06:11:34 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:44.853 06:11:34 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:44.853 00:07:44.853 real 0m0.193s 00:07:44.853 user 0m0.128s 00:07:44.853 sys 0m0.091s 00:07:44.853 06:11:34 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.853 
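The version test above derives the expected string straight from include/spdk/version.h and compares it with what the Python bindings report. A condensed sketch of that flow, assuming the usual '#define SPDK_VERSION_MAJOR 25' header layout:

    get_header_version() {  # field: MAJOR, MINOR, PATCH or SUFFIX
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    # patch 0 with a -pre suffix composes to the rc0 form seen above
    ver="$(get_header_version MAJOR).$(get_header_version MINOR)rc0"
    [[ "$(python3 -c 'import spdk; print(spdk.__version__)')" == "$ver" ]]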
06:11:34 version -- common/autotest_common.sh@10 -- # set +x 00:07:44.853 ************************************ 00:07:44.853 END TEST version 00:07:44.853 ************************************ 00:07:45.125 06:11:34 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:45.125 06:11:34 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:45.125 06:11:34 -- spdk/autotest.sh@194 -- # uname -s 00:07:45.125 06:11:34 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:45.125 06:11:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:45.125 06:11:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:45.125 06:11:34 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:45.125 06:11:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:45.125 06:11:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:45.125 06:11:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:45.125 06:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:45.125 06:11:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:45.125 06:11:34 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:45.125 06:11:34 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:45.125 06:11:34 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:45.125 06:11:34 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:45.125 06:11:34 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:45.125 06:11:34 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:45.125 06:11:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.125 06:11:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.125 06:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:45.125 ************************************ 00:07:45.125 START TEST nvmf_tcp 00:07:45.125 ************************************ 00:07:45.125 06:11:35 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:45.125 * Looking for test storage... 
00:07:45.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:45.125 06:11:35 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.125 06:11:35 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.125 06:11:35 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.125 06:11:35 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.125 06:11:35 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:45.125 06:11:35 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.125 06:11:35 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.125 --rc genhtml_branch_coverage=1 00:07:45.125 --rc genhtml_function_coverage=1 00:07:45.125 --rc genhtml_legend=1 00:07:45.125 --rc geninfo_all_blocks=1 00:07:45.125 --rc geninfo_unexecuted_blocks=1 00:07:45.125 00:07:45.125 ' 00:07:45.125 06:11:35 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.125 --rc genhtml_branch_coverage=1 00:07:45.125 --rc genhtml_function_coverage=1 00:07:45.125 --rc genhtml_legend=1 00:07:45.125 --rc geninfo_all_blocks=1 00:07:45.125 --rc geninfo_unexecuted_blocks=1 00:07:45.125 00:07:45.125 ' 00:07:45.125 06:11:35 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:07:45.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.125 --rc genhtml_branch_coverage=1 00:07:45.125 --rc genhtml_function_coverage=1 00:07:45.125 --rc genhtml_legend=1 00:07:45.125 --rc geninfo_all_blocks=1 00:07:45.125 --rc geninfo_unexecuted_blocks=1 00:07:45.125 00:07:45.125 ' 00:07:45.125 06:11:35 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.125 --rc genhtml_branch_coverage=1 00:07:45.125 --rc genhtml_function_coverage=1 00:07:45.125 --rc genhtml_legend=1 00:07:45.125 --rc geninfo_all_blocks=1 00:07:45.125 --rc geninfo_unexecuted_blocks=1 00:07:45.125 00:07:45.125 ' 00:07:45.125 06:11:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:45.125 06:11:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:45.125 06:11:35 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:45.125 06:11:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.125 06:11:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.125 06:11:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.125 ************************************ 00:07:45.125 START TEST nvmf_target_core 00:07:45.125 ************************************ 00:07:45.125 06:11:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:45.125 * Looking for test storage... 00:07:45.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:45.125 06:11:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.125 06:11:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.125 06:11:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.385 --rc genhtml_branch_coverage=1 00:07:45.385 --rc genhtml_function_coverage=1 00:07:45.385 --rc genhtml_legend=1 00:07:45.385 --rc geninfo_all_blocks=1 00:07:45.385 --rc geninfo_unexecuted_blocks=1 00:07:45.385 00:07:45.385 ' 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.385 --rc genhtml_branch_coverage=1 00:07:45.385 --rc genhtml_function_coverage=1 00:07:45.385 --rc genhtml_legend=1 00:07:45.385 --rc geninfo_all_blocks=1 00:07:45.385 --rc geninfo_unexecuted_blocks=1 00:07:45.385 00:07:45.385 ' 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.385 --rc genhtml_branch_coverage=1 00:07:45.385 --rc genhtml_function_coverage=1 00:07:45.385 --rc genhtml_legend=1 00:07:45.385 --rc geninfo_all_blocks=1 00:07:45.385 --rc geninfo_unexecuted_blocks=1 00:07:45.385 00:07:45.385 ' 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.385 --rc genhtml_branch_coverage=1 00:07:45.385 --rc genhtml_function_coverage=1 00:07:45.385 --rc genhtml_legend=1 00:07:45.385 --rc geninfo_all_blocks=1 00:07:45.385 --rc geninfo_unexecuted_blocks=1 00:07:45.385 00:07:45.385 ' 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:45.385 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.386 
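The lt/cmp_versions trace near the top of this test's preamble (scripts/common.sh@333-368) is what decides whether the installed lcov predates version 2, which in turn selects the --rc option spellings exported above. The comparison splits each version string on dots and dashes and walks the fields numerically, left to right. A minimal sketch of the same shape, under the assumption that purely numeric fields suffice (hypothetical helper name version_lt; the real script additionally sanitizes non-numeric fields through its decimal helper):

  # Return 0 (true) when version $1 sorts strictly below version $2.
  version_lt() {
      local IFS=.- v=0
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      # Walk fields left to right; a missing field counts as 0.
      while (( v < ${#a[@]} || v < ${#b[@]} )); do
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
          (( ++v ))
      done
      return 1    # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo old-style    # 1 < 2 on the first field, so this prints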
************************************ 00:07:45.386 START TEST nvmf_abort 00:07:45.386 ************************************ 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:45.386 * Looking for test storage... 00:07:45.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.386 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.644 --rc genhtml_branch_coverage=1 00:07:45.644 --rc genhtml_function_coverage=1 00:07:45.644 --rc genhtml_legend=1 00:07:45.644 --rc geninfo_all_blocks=1 00:07:45.644 --rc geninfo_unexecuted_blocks=1 00:07:45.644 00:07:45.644 ' 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.644 --rc genhtml_branch_coverage=1 00:07:45.644 --rc genhtml_function_coverage=1 00:07:45.644 --rc genhtml_legend=1 00:07:45.644 --rc geninfo_all_blocks=1 00:07:45.644 --rc geninfo_unexecuted_blocks=1 00:07:45.644 00:07:45.644 ' 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.644 --rc genhtml_branch_coverage=1 00:07:45.644 --rc genhtml_function_coverage=1 00:07:45.644 --rc genhtml_legend=1 00:07:45.644 --rc geninfo_all_blocks=1 00:07:45.644 --rc geninfo_unexecuted_blocks=1 00:07:45.644 00:07:45.644 ' 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.644 --rc genhtml_branch_coverage=1 00:07:45.644 --rc genhtml_function_coverage=1 00:07:45.644 --rc genhtml_legend=1 00:07:45.644 --rc geninfo_all_blocks=1 00:07:45.644 --rc geninfo_unexecuted_blocks=1 00:07:45.644 00:07:45.644 ' 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
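The "[: : integer expression expected" message that nvmf/common.sh line 33 prints here (and earlier, in the nvmf_target_core preamble) comes from test being handed an empty string where -eq demands an integer: the traced command is '[' '' -eq 1 ']', i.e. whatever flag variable is being tested was unset. The test evaluates false and the harness carries on, so the message is noise rather than a failure. A tiny repro and the usual guard, as a sketch:

  flag=''
  [ "$flag" -eq 1 ] && echo enabled         # prints the same [: error, then is false
  [ "${flag:-0}" -eq 1 ] && echo enabled    # guarded: empty defaults to 0, no error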
00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.644 06:11:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.179 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.180 06:11:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:48.180 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:48.180 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:48.180 06:11:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:48.180 Found net devices under 0000:84:00.0: cvl_0_0 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:48.180 Found net devices under 0000:84:00.1: cvl_0_1 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.180 06:11:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:48.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:07:48.180 00:07:48.180 --- 10.0.0.2 ping statistics --- 00:07:48.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.180 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:48.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:07:48.180 00:07:48.180 --- 10.0.0.1 ping statistics --- 00:07:48.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.180 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=951377 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 951377 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 951377 ']' 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.180 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.181 06:11:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.181 [2024-12-08 06:11:37.946322] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
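At this point nvmftestinit has the first port (cvl_0_0) inside the cvl_0_0_ns_spdk namespace acting as the target side at 10.0.0.2, the second port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, and both directions ping-verified. nvmfappstart then launches nvmf_tgt inside that namespace and waitforlisten blocks until the target answers on its RPC socket. A hedged sketch of the waiting idea only; the real helper in autotest_common.sh is more thorough (it retries an actual RPC over /var/tmp/spdk.sock rather than just checking for the socket file):

  pid=951377 rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
      # Give up early if the target process died during startup.
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited"; exit 1; }
      # The UNIX-domain RPC socket appearing is the readiness signal.
      [ -S "$rpc_addr" ] && break
      sleep 0.1
  done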
00:07:48.181 [2024-12-08 06:11:37.946391] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.181 [2024-12-08 06:11:38.017982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.181 [2024-12-08 06:11:38.079592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.181 [2024-12-08 06:11:38.079664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.181 [2024-12-08 06:11:38.079692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.181 [2024-12-08 06:11:38.079703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.181 [2024-12-08 06:11:38.079713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.181 [2024-12-08 06:11:38.081477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.181 [2024-12-08 06:11:38.081542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.181 [2024-12-08 06:11:38.081546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.181 [2024-12-08 06:11:38.235308] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.181 Malloc0 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.181 Delay0 
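The Delay0 bdev created above is what gives the abort test something to abort: bdev_delay_create stacks a latency-injecting bdev on top of Malloc0, and, going by the flag order of SPDK's bdev_delay_create RPC, the four numbers are the average/p99 read and average/p99 write latencies in microseconds, so 1000000 us = 1 s per I/O. With commands parked that long at the target, the abort example run below can reliably cancel I/O that is still queued. An annotated restatement of the call, as a sketch:

  #   -b Malloc0   base bdev to stack on
  #   -d Delay0    name of the new delay bdev
  #   -r / -t      average / p99 read latency (microseconds)
  #   -w / -n      average / p99 write latency (microseconds)
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000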
00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.181 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.441 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.441 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.441 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.441 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.441 [2024-12-08 06:11:38.305412] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.441 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.441 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.441 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.441 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.441 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.441 06:11:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:48.441 [2024-12-08 06:11:38.410849] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:50.979 Initializing NVMe Controllers 00:07:50.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:50.979 controller IO queue size 128 less than required 00:07:50.979 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:50.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:50.979 Initialization complete. Launching workers. 
00:07:50.979 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28299 00:07:50.979 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28360, failed to submit 62 00:07:50.979 success 28303, unsuccessful 57, failed 0 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:50.979 rmmod nvme_tcp 00:07:50.979 rmmod nvme_fabrics 00:07:50.979 rmmod nvme_keyring 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 951377 ']' 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 951377 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 951377 ']' 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 951377 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 951377 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 951377' 00:07:50.979 killing process with pid 951377 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 951377 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 951377 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.979 06:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.885 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:52.885 00:07:52.885 real 0m7.538s 00:07:52.885 user 0m10.709s 00:07:52.885 sys 0m2.773s 00:07:52.885 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.885 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.885 ************************************ 00:07:52.885 END TEST nvmf_abort 00:07:52.885 ************************************ 00:07:52.885 06:11:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:52.885 06:11:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.885 06:11:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.885 06:11:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.885 ************************************ 00:07:52.885 START TEST nvmf_ns_hotplug_stress 00:07:52.885 ************************************ 00:07:52.885 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:53.146 * Looking for test storage... 
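A quick bookkeeping check on the nvmf_abort numbers reported above appears consistent: 123 I/Os completed plus 28299 failed gives 28422 commands accounted for, which matches the 28360 aborts submitted plus the 62 that could not be submitted (28360 + 62 = 28422); of the submitted aborts, 28303 successful plus 57 unsuccessful again totals 28360. Every outstanding command was therefore matched by exactly one abort attempt. The pass finished in 7.5 s wall time, after which teardown removed the SPDK_NVMF iptables rule, deleted the test namespace, and flushed the initiator address before the harness moved on to nvmf_ns_hotplug_stress below.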
00:07:53.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:53.146 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:53.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.147 --rc genhtml_branch_coverage=1 00:07:53.147 --rc genhtml_function_coverage=1 00:07:53.147 --rc genhtml_legend=1 00:07:53.147 --rc geninfo_all_blocks=1 00:07:53.147 --rc geninfo_unexecuted_blocks=1 00:07:53.147 00:07:53.147 ' 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:53.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.147 --rc genhtml_branch_coverage=1 00:07:53.147 --rc genhtml_function_coverage=1 00:07:53.147 --rc genhtml_legend=1 00:07:53.147 --rc geninfo_all_blocks=1 00:07:53.147 --rc geninfo_unexecuted_blocks=1 00:07:53.147 00:07:53.147 ' 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:53.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.147 --rc genhtml_branch_coverage=1 00:07:53.147 --rc genhtml_function_coverage=1 00:07:53.147 --rc genhtml_legend=1 00:07:53.147 --rc geninfo_all_blocks=1 00:07:53.147 --rc geninfo_unexecuted_blocks=1 00:07:53.147 00:07:53.147 ' 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:53.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.147 --rc genhtml_branch_coverage=1 00:07:53.147 --rc genhtml_function_coverage=1 00:07:53.147 --rc genhtml_legend=1 00:07:53.147 --rc geninfo_all_blocks=1 00:07:53.147 --rc geninfo_unexecuted_blocks=1 00:07:53.147 00:07:53.147 ' 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:53.147 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:55.681 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.681 
06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:55.681 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:55.681 Found net devices under 0000:84:00.0: cvl_0_0 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.681 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:55.682 Found net devices under 0000:84:00.1: cvl_0_1 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:55.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:55.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms
00:07:55.682
00:07:55.682 --- 10.0.0.2 ping statistics ---
00:07:55.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:55.682 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:55.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:55.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms
00:07:55.682
00:07:55.682 --- 10.0.0.1 ping statistics ---
00:07:55.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:55.682 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=953749
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 953749
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 953749 ']'
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:55.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:55.682 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:55.683 [2024-12-08 06:11:45.546444] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:07:55.683 [2024-12-08 06:11:45.546556] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:55.683 [2024-12-08 06:11:45.619536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:55.683 [2024-12-08 06:11:45.678420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:55.683 [2024-12-08 06:11:45.678489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:55.683 [2024-12-08 06:11:45.678518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:55.683 [2024-12-08 06:11:45.678530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:55.683 [2024-12-08 06:11:45.678540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
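At this point the test environment is fully wired: the nvmf_tcp_init sequence traced above splits the two ice ports between the root namespace and a dedicated network namespace, then nvmf_tgt is launched inside that namespace. A minimal sketch of the wiring, assembled from the commands logged above; the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addresses are specific to this run:

    # Sketch of the nvmf_tcp_init wiring traced above (nvmf/common.sh@250-291).
    ip netns add cvl_0_0_ns_spdk                         # namespace for the SPDK target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # root namespace -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

Running the target behind ip netns exec (the @508 trace above) is what lets a single host act as both NVMe/TCP initiator (10.0.0.1, root namespace) and target (10.0.0.2, cvl_0_0_ns_spdk) over real back-to-back NICs.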
00:07:55.683 [2024-12-08 06:11:45.680268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.683 [2024-12-08 06:11:45.680396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.683 [2024-12-08 06:11:45.680400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.940 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.940 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:55.940 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:55.940 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:55.940 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:55.940 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.940 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:55.940 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:56.197 [2024-12-08 06:11:46.074285] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.197 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:56.455 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.712 [2024-12-08 06:11:46.625163] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.712 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:56.969 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:57.227 Malloc0 00:07:57.227 06:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:57.484 Delay0 00:07:57.484 06:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.741 06:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:57.998 NULL1 00:07:57.998 06:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:58.255 06:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=954054 00:07:58.255 06:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:58.255 06:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:07:58.255 06:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.513 06:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.814 06:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:58.814 06:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:59.070 true 00:07:59.070 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:07:59.070 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.328 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.585 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:59.585 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:59.844 true 00:07:59.844 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:07:59.844 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.413 06:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.413 06:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:00.413 06:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:00.981 true 00:08:00.981 06:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:00.981 06:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.920 Read completed with error (sct=0, sc=11) 00:08:01.920 06:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.920 06:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:01.920 06:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:02.179 true 00:08:02.179 06:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:02.179 06:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.437 06:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.696 06:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:02.696 06:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:03.264 true 00:08:03.264 06:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:03.264 06:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.834 06:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.092 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:04.092 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:04.350 true 00:08:04.350 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:04.350 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.609 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.867 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:04.867 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:05.133 true 00:08:05.133 06:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:05.133 06:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.390 06:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.646 06:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:05.646 06:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:05.902 true 00:08:05.902 06:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:05.902 06:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.832 06:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.090 06:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:07.090 06:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:07.655 true 00:08:07.655 06:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:07.655 06:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.655 06:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.220 06:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:08.220 06:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:08.220 true 00:08:08.220 06:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:08.220 06:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.155 06:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.413 06:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:09.413 06:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:09.671 true 00:08:09.671 06:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:09.671 06:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.929 06:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.187 06:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:10.187 06:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:10.445 true 00:08:10.445 06:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:10.445 06:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.012 06:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.270 06:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:11.270 06:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:11.528 true 00:08:11.528 06:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:11.528 06:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.478 06:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.736 06:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:12.736 06:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:12.993 true 00:08:12.993 06:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:12.993 06:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.250 06:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.508 06:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:13.508 06:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:13.765 true 00:08:13.765 06:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:13.765 06:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.023 06:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.281 06:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:14.281 06:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:14.538 true 00:08:14.538 06:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:14.538 06:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.475 06:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.732 06:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:15.732 06:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:15.989 true 00:08:15.989 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:15.989 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.247 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.520 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:16.520 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:16.780 true 00:08:16.780 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:16.780 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.365 06:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.365 06:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:17.365 06:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:17.931 true 00:08:17.932 06:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:17.932 06:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.866 06:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.866 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.124 06:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:19.124 06:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:19.382 true 00:08:19.382 06:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:19.382 06:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.640 06:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.898 06:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:19.898 06:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 
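The "Read completed with error (sct=0, sc=11)" lines interleaved with the hotplug traces are the stress test behaving as designed: while namespace 1 is detached, in-flight reads complete with what appears to be the NVMe generic status Invalid Namespace or Format (0x0b). The perf job survives this because of how it was started at target/ns_hotplug_stress.sh@40, traced earlier; its flags, reflowed from that trace into an argument array, where the comments are editorial interpretation rather than log content:

    # spdk_nvme_perf invocation from the @40 trace, reflowed for readability.
    perf_args=(
        -c 0x1                                                    # pin the workload to core 0
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'  # target listener in the netns
        -t 30                                                     # run for 30 seconds
        -q 128                                                    # queue depth 128
        -w randread -o 512                                        # 512-byte random reads
        -Q 1000                                                   # keep going on I/O errors
    )
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf "${perf_args[@]}"

On that reading, -Q 1000 is also why each error arrives as "Message suppressed 999 times": only roughly one in every thousand failed reads gets logged.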
00:08:20.156 true 00:08:20.156 06:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:20.156 06:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.413 06:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.671 06:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:20.671 06:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:20.929 true 00:08:20.929 06:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:20.929 06:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.867 06:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.124 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:22.124 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:22.382 true 00:08:22.382 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:22.382 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.639 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.896 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:22.896 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:23.153 true 00:08:23.153 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:23.153 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.411 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.669 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:23.669 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:23.927 true 00:08:23.927 06:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:23.927 06:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.861 06:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.119 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:25.119 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:25.377 true 00:08:25.377 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:25.377 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.634 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.201 06:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:26.201 06:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:26.201 true 00:08:26.201 06:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:26.201 06:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.459 06:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.717 06:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:26.717 06:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:26.975 true 00:08:27.233 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:27.233 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.171 06:12:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.430 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:28.430 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:28.688 true 00:08:28.688 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054 00:08:28.688 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.688 Initializing NVMe Controllers 00:08:28.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:28.688 Controller IO queue size 128, less than required. 00:08:28.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:28.688 Controller IO queue size 128, less than required. 00:08:28.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:28.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:28.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:28.688 Initialization complete. Launching workers. 
00:08:28.688 Initializing NVMe Controllers
00:08:28.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:28.688 Controller IO queue size 128, less than required.
00:08:28.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:28.688 Controller IO queue size 128, less than required.
00:08:28.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:28.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:28.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:28.688 Initialization complete. Launching workers.
00:08:28.688 ========================================================
00:08:28.688                                                                                                Latency(us)
00:08:28.688 Device Information                                                       :    IOPS      MiB/s    Average        min        max
00:08:28.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  629.79       0.31   76665.36    2257.33 1046696.88
00:08:28.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8178.23       3.99   15604.37    2794.60  542558.23
00:08:28.688 ========================================================
00:08:28.688 Total                                                                    : 8808.03       4.30   19970.36    2257.33 1046696.88
00:08:28.688
00:08:28.947 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:29.205 06:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:08:29.205 06:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:08:29.463 true
00:08:29.463 06:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954054
00:08:29.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (954054) - No such process
00:08:29.463 06:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 954054
00:08:29.463 06:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:29.721 06:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:29.979 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:29.979 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:29.979 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:29.979 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:29.979 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:30.240 null0
00:08:30.240 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:30.240 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:30.240 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:30.497 null1
00:08:30.497 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:30.497 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:30.497 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:30.756 null2
00:08:31.014 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:31.014 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:31.014 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:31.014 null3
00:08:31.272 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:31.272 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:31.272 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:08:31.529 null4
00:08:31.529 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:31.529 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:31.529 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:31.786 null5
00:08:31.786 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:31.786 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:31.786 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:32.043 null6
00:08:32.043 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:32.043 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:32.043 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:32.300 null7
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
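With the target gone and namespaces 1 and 2 removed, the script enters its multi-threaded phase (lines @58-@60 in the trace above): it sets nthreads=8 and creates one small null bdev per worker, null0 through null7, each with the same size and block-size arguments. A sketch of that fragment as the xtrace implies it, with $rpc_py as shorthand for the full rpc.py path used in the trace:

    nthreads=8                                    # @58
    pids=()                                       # @58: will hold the worker PIDs spawned next
    for ((i = 0; i < nthreads; i++)); do          # @59
        $rpc_py bdev_null_create null$i 100 4096  # @60: name, total size, block size
    done

Each bare "nullN" line in the log is the RPC's reply echoing the name of the bdev it just created.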
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:32.300 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
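Each of the eight workers traced above is a background invocation of the script's add_remove helper (lines @14-@18), started by the loop at @62-@64 and reaped by the wait at @66, visible in the trace as "wait 958862 958863 ...". Reconstructed from the xtrace; the exact argument expressions and quoting are inferred rather than copied:

    add_remove() {
        local nsid=$1 bdev=$2                                                       # @14
        for ((i = 0; i < 10; i++)); do                                              # @16: ten cycles per worker
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid       # @18
        done
    }

    for ((i = 0; i < nthreads; i++)); do  # @62
        add_remove $((i + 1)) null$i &    # @63: nsid 1..8 paired with null0..null7
        pids+=($!)                        # @64
    done
    wait ${pids[@]}                       # @66

Because all eight workers run concurrently against the same subsystem, their @16-@18 trace lines interleave from here on, which is why the add/remove records below appear shuffled rather than in namespace order.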
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 958862 958863 958865 958867 958870 958874 958877 958879
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.301 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:32.558 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:32.558 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:32.558 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:32.558 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:32.558 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:32.558 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:32.558 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:32.558 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.815 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:33.073 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:33.073 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:33.073 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:33.073 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:33.073 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:33.073 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:33.073 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:33.073 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:33.330 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.330 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.330 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:33.330 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.330 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.330 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.588 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:33.845 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:33.845 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:33.845 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:33.845 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:33.845 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:33.845 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:33.845 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:33.845 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.104 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:34.362 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:34.362 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:34.362 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:34.362 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:34.362 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:34.362 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:34.362 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:34.362 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.622 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:34.880 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:34.880 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:34.880 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:34.880 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:34.880 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:34.880 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:34.880 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:34.880 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:35.138 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:35.138 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:35.138 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:35.138 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:35.138 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:35.138 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.396 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.669 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.669 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.670 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.670 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.670 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.670 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.670 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.670 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.929 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.188 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.188 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.188 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.188 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.188 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.188 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.188 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.188 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:36.447 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:36.706 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:36.706 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:36.706 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:36.706 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:36.706 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:36.706 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:36.706 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:36.706 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:36.965 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:37.224 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:37.224 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:37.224 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:37.224 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:37.224 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:37.224 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:37.224 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:37.482 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:37.482 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:37.482 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:37.482 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:37.482 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:37.482 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:37.482 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
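The add/remove churn above is the heart of ns_hotplug_stress.sh: line 17 hot-adds namespaces 1-8 (backed by null bdevs null0-null7) in a shuffled order while line 16 counts iterations, and line 18 then hot-removes them in another shuffled order. A minimal reconstruction of that loop, assuming shuf supplies the randomization (the real script may randomize and bound its iterations slightly differently):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1
    i=0
    while (( i < 10 )); do
        # hot-add namespaces 1-8 in random order; nsid N maps to bdev null(N-1)
        for n in $(seq 1 8 | shuf); do
            (( ++i ))
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$subnqn" "null$((n - 1))"
        done
        # hot-remove the same namespaces in a fresh random order
        for n in $(seq 1 8 | shuf); do
            "$rpc" nvmf_subsystem_remove_ns "$subnqn" "$n"
        done
    done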
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:37.741 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:38.000 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:38.000 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:38.000 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:38.000 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:38.000 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:38.000 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:38.000 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:38.000 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:38.258 rmmod nvme_tcp
00:08:38.258 rmmod nvme_fabrics
00:08:38.258 rmmod nvme_keyring
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 953749 ']'
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 953749
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 953749 ']'
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 953749
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:38.258 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 953749
00:08:38.516 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:38.516 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:38.516 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 953749'
00:08:38.516 killing process with pid 953749
00:08:38.516 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 953749
00:08:38.516 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 953749
00:08:38.776 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
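Condensed, the nvmftestfini/nvmfcleanup teardown that starts here and finishes just below amounts to roughly the following; the final netns step paraphrases _remove_spdk_ns, whose body is hidden behind xtrace_disable_per_cmd in this trace:

    sync
    modprobe -v -r nvme-tcp        # the rmmod lines above are this unload cascading through nvme_tcp/nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 953749 && wait 953749     # the nvmf_tgt reactor process for this test
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK-tagged ACCEPT rules
    ip netns delete cvl_0_0_ns_spdk   # assumption: roughly what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1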
00:08:38.776 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:38.776 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:38.776 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:08:38.776 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:08:38.776 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:38.776 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:08:38.776 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:38.776 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:38.776 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:38.776 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:38.776 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:40.707 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:40.707
00:08:40.707 real 0m47.732s
00:08:40.707 user 3m42.844s
00:08:40.707 sys 0m16.391s
00:08:40.707 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:40.707 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:08:40.707 ************************************
00:08:40.707 END TEST nvmf_ns_hotplug_stress
00:08:40.707 ************************************
00:08:40.707 06:12:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:40.707 06:12:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:40.707 06:12:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:40.707 06:12:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:40.707 ************************************
00:08:40.707 START TEST nvmf_delete_subsystem
00:08:40.707 ************************************
00:08:40.707 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:40.707 * Looking for test storage...
00:08:40.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:40.707 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:40.707 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:08:40.707 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:40.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:40.966 --rc genhtml_branch_coverage=1
00:08:40.966 --rc genhtml_function_coverage=1
00:08:40.966 --rc genhtml_legend=1
00:08:40.966 --rc geninfo_all_blocks=1
00:08:40.966 --rc geninfo_unexecuted_blocks=1
00:08:40.966
00:08:40.966 '
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:40.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:40.966 --rc genhtml_branch_coverage=1
00:08:40.966 --rc genhtml_function_coverage=1
00:08:40.966 --rc genhtml_legend=1
00:08:40.966 --rc geninfo_all_blocks=1
00:08:40.966 --rc geninfo_unexecuted_blocks=1
00:08:40.966
00:08:40.966 '
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:08:40.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:40.966 --rc genhtml_branch_coverage=1
00:08:40.966 --rc genhtml_function_coverage=1
00:08:40.966 --rc genhtml_legend=1
00:08:40.966 --rc geninfo_all_blocks=1
00:08:40.966 --rc geninfo_unexecuted_blocks=1
00:08:40.966
00:08:40.966 '
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:08:40.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:40.966 --rc genhtml_branch_coverage=1
00:08:40.966 --rc genhtml_function_coverage=1
00:08:40.966 --rc genhtml_legend=1
00:08:40.966 --rc geninfo_all_blocks=1
00:08:40.966 --rc geninfo_unexecuted_blocks=1
00:08:40.966
00:08:40.966 '
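The cmp_versions walk above is deciding whether the installed lcov (1.15) predates major version 2 so that compatible coverage flags get exported. The same logic as a self-contained sketch (the real scripts/common.sh drives the other comparison operators through the same loop):

    # returns 0 (true) when $1 is a strictly lower version than $2
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:                       # split versions on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # 1 < 2 here, hence the 'return 0' above
        done
        return 1                            # equal versions are not strictly less
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"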
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:40.966 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:40.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:08:40.967 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
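One genuine wart surfaces in the trace above: common.sh line 33 runs '[' '' -eq 1 ']' and bash prints "[: : integer expression expected", because -eq demands integers and the tested variable is empty; the trace does not reveal which variable that is. The usual defensive spelling, as a hypothetical sketch:

    # SOME_FLAG is a stand-in name; the variable tested at common.sh line 33 is not visible in this trace
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi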
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:08:43.499 Found 0000:84:00.0 (0x8086 - 0x159b)
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:08:43.499 Found 0000:84:00.1 (0x8086 - 0x159b)
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:43.499 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:08:43.499 Found net devices under 0000:84:00.0: cvl_0_0
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:08:43.500 Found net devices under 0000:84:00.1: cvl_0_1
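The discovery pass above is a plain sysfs walk: any netdev bound to a PCI function appears under that function's net/ directory. A minimal equivalent, assuming the `[[ up == up ]]` checks in the trace are operstate tests:

    for pci in 0000:84:00.0 0000:84:00.1; do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $net ]] || continue
            [[ $(<"$net/operstate") == up ]] || continue   # assumption: mirrors the 'up == up' test
            echo "Found net devices under $pci: ${net##*/}"
        done
    done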
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
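Collected in one place, the namespace plumbing just traced (the firewall rule and the connectivity pings follow immediately below):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # the target NIC disappears into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up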
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:43.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:43.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms
00:08:43.500
00:08:43.500 --- 10.0.0.2 ping statistics ---
00:08:43.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:43.500 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:43.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:43.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms
00:08:43.500
00:08:43.500 --- 10.0.0.1 ping statistics ---
00:08:43.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:43.500 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=961787
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 961787
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 961787 ']'
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
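nvmfappstart, per the trace above, is just nvmf_tgt launched inside the target namespace plus a poll on its RPC socket. A condensed sketch; the polling body of waitforlisten is paraphrased, since xtrace only shows its setup here:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # waitforlisten 961787, roughly: retry until the UNIX-domain RPC socket answers
    for (( retry = 0; retry < 100; retry++ )); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done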
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:43.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:43.500 [2024-12-08 06:12:33.240988] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:08:43.500 [2024-12-08 06:12:33.241117] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:43.500 [2024-12-08 06:12:33.313849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:43.500 [2024-12-08 06:12:33.372651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:43.500 [2024-12-08 06:12:33.372713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:43.500 [2024-12-08 06:12:33.372751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:43.500 [2024-12-08 06:12:33.372765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:43.500 [2024-12-08 06:12:33.372774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:43.500 [2024-12-08 06:12:33.374361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:43.500 [2024-12-08 06:12:33.374367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:43.500 [2024-12-08 06:12:33.523752] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:43.500 [2024-12-08 06:12:33.540011] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:43.500 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:43.501 NULL1
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:43.501 Delay0
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=961813
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:08:43.501 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:43.759 [2024-12-08 06:12:33.624834] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
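Pulling the RPCs out of the trace, the entire target setup for this test plus its load generator is the short sequence below (rpc_cmd in the test scripts is a thin wrapper around rpc.py); the timing is the whole point, since the two-second sleep lets spdk_nvme_perf fill its 128-deep queues against the deliberately slow Delay0 bdev before the subsystem is deleted out from under it:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512        # null backing bdev, 512-byte blocks
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s added latency keeps I/O in flight
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # races the in-flight I/O, by design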
00:08:45.669 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:45.669 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.669 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 [2024-12-08 06:12:35.876519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0b4a0 is same with the state(6) to be set
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 starting I/O failed: -6
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Write completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.930 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 starting I/O failed: -6
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 [2024-12-08 06:12:35.877375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1198000c40 is same with the state(6) to be set
00:08:45.931 Write completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Write completed with error (sct=0, sc=8)
00:08:45.931 Write completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Write completed with error (sct=0, sc=8)
00:08:45.931 Write completed with error (sct=0, sc=8)
00:08:45.931 Write completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Write completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8)
00:08:45.931 Write completed with error (sct=0, sc=8)
00:08:45.931 Write completed with error (sct=0, sc=8)
00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Write completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Write completed with error (sct=0, sc=8) 00:08:45.931 Write completed with error (sct=0, sc=8) 00:08:45.931 Write completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Write completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Write completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Write completed with error (sct=0, sc=8) 00:08:45.931 Write completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Write completed with error (sct=0, sc=8) 00:08:45.931 Write completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:45.931 Write completed with error (sct=0, sc=8) 00:08:45.931 Read completed with error (sct=0, sc=8) 00:08:46.922 [2024-12-08 06:12:36.843553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0c9b0 is same with the state(6) to be set 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 [2024-12-08 06:12:36.878862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0b680 is same with the state(6) to be set 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 
Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 [2024-12-08 06:12:36.879072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0b2c0 is same with the state(6) to be set 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 [2024-12-08 06:12:36.879505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f119800d020 is same with the state(6) to be set 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 Write completed 
with error (sct=0, sc=8) 00:08:46.922 Read completed with error (sct=0, sc=8) 00:08:46.922 [2024-12-08 06:12:36.880063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f119800d7e0 is same with the state(6) to be set 00:08:46.922 Initializing NVMe Controllers 00:08:46.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:46.922 Controller IO queue size 128, less than required. 00:08:46.922 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:46.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:46.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:46.922 Initialization complete. Launching workers. 00:08:46.922 ======================================================== 00:08:46.922 Latency(us) 00:08:46.922 Device Information : IOPS MiB/s Average min max 00:08:46.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.27 0.08 901394.39 797.70 1012078.08 00:08:46.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 168.76 0.08 896598.22 404.55 1012100.08 00:08:46.922 ======================================================== 00:08:46.922 Total : 336.03 0.16 898985.68 404.55 1012100.08 00:08:46.922 00:08:46.922 [2024-12-08 06:12:36.880752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0c9b0 (9): Bad file descriptor 00:08:46.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:46.922 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.922 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:46.922 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 961813 00:08:46.922 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 961813 00:08:47.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (961813) - No such process 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 961813 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 961813 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 961813 00:08:47.524 
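The xtrace above shows target/delete_subsystem.sh polling the orphaned perf process (961813) with kill -0 until it disappears, then asserting via the NOT helper that wait on the dead PID fails; the assertion's bookkeeping continues in the trace below. A minimal bash sketch of that polling pattern, simplified from the trace (the function name and the ~15 s bound are illustrative, not taken from the script):

    #!/usr/bin/env bash
    # Sketch only: poll a PID with kill -0, then reap it once it is gone.
    wait_for_exit() {
        local pid=$1 delay=0
        # kill -0 sends no signal; it merely tests whether the PID still exists.
        while kill -0 "$pid" 2>/dev/null; do
            (( delay++ > 30 )) && return 1   # give up after ~15 s of 0.5 s sleeps
            sleep 0.5
        done
        # As in the NOT-wait assertion above, calling wait on a vanished PID
        # fails with "No such process"; tolerate that and report success.
        wait "$pid" 2>/dev/null || true
    }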
06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.524 [2024-12-08 06:12:37.402061] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=962307 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 962307 00:08:47.524 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:47.524 [2024-12-08 06:12:37.468872] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
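With the old perf process reaped, the test rebuilds the subsystem (delete_subsystem.sh@48-54) and launches a second timed perf run against it. The same sequence, reconstructed as a standalone bash sketch; the relative paths and the default RPC socket are assumptions, while every flag is copied from the trace:

    # Recreate the subsystem, re-expose it over TCP, and start a timed load.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # 3 s of 70/30 randrw at queue depth 128, 512-byte I/O, on cores 2-3 (0xC).
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!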
00:08:48.093 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:48.093 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 962307 00:08:48.093 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:48.351 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:48.351 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 962307 00:08:48.351 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:48.919 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:48.919 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 962307 00:08:48.919 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:49.488 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:49.488 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 962307 00:08:49.488 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:50.057 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:50.057 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 962307 00:08:50.057 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:50.317 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:50.317 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 962307 00:08:50.317 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:50.884 Initializing NVMe Controllers 00:08:50.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:50.884 Controller IO queue size 128, less than required. 00:08:50.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:50.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:50.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:50.884 Initialization complete. Launching workers. 
00:08:50.884 ========================================================
00:08:50.884                                                                               Latency(us)
00:08:50.884 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:08:50.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1004500.27 1000145.29 1013521.49
00:08:50.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004979.09 1000171.01 1010658.03
00:08:50.884 ========================================================
00:08:50.884 Total                                                                     :     256.00       0.12 1004739.68 1000145.29 1013521.49
00:08:50.884
00:08:50.884 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:50.884 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 962307
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (962307) - No such process
00:08:50.884 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 962307
00:08:50.884 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:50.884 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:50.884 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:50.884 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:08:50.884 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:50.884 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:08:50.884 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:50.884 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:50.885 rmmod nvme_tcp
00:08:50.885 rmmod nvme_fabrics
00:08:50.885 rmmod nvme_keyring
00:08:50.885 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:50.885 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:08:50.885 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:08:50.885 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 961787 ']'
00:08:50.885 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 961787
00:08:50.885 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 961787 ']'
00:08:50.885 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 961787
00:08:50.885 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:08:50.885 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:50.885 06:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 961787
00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo
']' 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 961787' 00:08:51.143 killing process with pid 961787 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 961787 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 961787 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.143 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.678 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.678 00:08:53.678 real 0m12.544s 00:08:53.678 user 0m28.253s 00:08:53.678 sys 0m3.074s 00:08:53.678 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.678 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.678 ************************************ 00:08:53.679 END TEST nvmf_delete_subsystem 00:08:53.679 ************************************ 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.679 ************************************ 00:08:53.679 START TEST nvmf_host_management 00:08:53.679 ************************************ 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:53.679 * Looking for test storage... 
00:08:53.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:53.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.679 --rc genhtml_branch_coverage=1 00:08:53.679 --rc genhtml_function_coverage=1 00:08:53.679 --rc genhtml_legend=1 00:08:53.679 --rc geninfo_all_blocks=1 00:08:53.679 --rc geninfo_unexecuted_blocks=1 00:08:53.679 00:08:53.679 ' 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:53.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.679 --rc genhtml_branch_coverage=1 00:08:53.679 --rc genhtml_function_coverage=1 00:08:53.679 --rc genhtml_legend=1 00:08:53.679 --rc geninfo_all_blocks=1 00:08:53.679 --rc geninfo_unexecuted_blocks=1 00:08:53.679 00:08:53.679 ' 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:53.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.679 --rc genhtml_branch_coverage=1 00:08:53.679 --rc genhtml_function_coverage=1 00:08:53.679 --rc genhtml_legend=1 00:08:53.679 --rc geninfo_all_blocks=1 00:08:53.679 --rc geninfo_unexecuted_blocks=1 00:08:53.679 00:08:53.679 ' 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:53.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.679 --rc genhtml_branch_coverage=1 00:08:53.679 --rc genhtml_function_coverage=1 00:08:53.679 --rc genhtml_legend=1 00:08:53.679 --rc geninfo_all_blocks=1 00:08:53.679 --rc geninfo_unexecuted_blocks=1 00:08:53.679 00:08:53.679 ' 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.679 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:53.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.680 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.582 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.582 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:55.582 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:55.582 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:55.582 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:55.583 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:55.583 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:55.583 Found net devices under 0000:84:00.0: cvl_0_0 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.583 06:12:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:55.583 Found net devices under 0000:84:00.1: cvl_0_1 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.583 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:55.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:08:55.841 00:08:55.841 --- 10.0.0.2 ping statistics --- 00:08:55.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.841 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:55.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:08:55.841 00:08:55.841 --- 10.0.0.1 ping statistics --- 00:08:55.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.841 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=964717 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 964717 00:08:55.841 06:12:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 964717 ']' 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.841 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.841 [2024-12-08 06:12:45.854795] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:08:55.841 [2024-12-08 06:12:45.854866] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.841 [2024-12-08 06:12:45.922951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.099 [2024-12-08 06:12:45.978201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.099 [2024-12-08 06:12:45.978259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.099 [2024-12-08 06:12:45.978283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.099 [2024-12-08 06:12:45.978293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.099 [2024-12-08 06:12:45.978303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
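The nvmf_tcp_init sequence traced above reduces to the following sketch. The interface names cvl_0_0/cvl_0_1, the namespace cvl_0_0_ns_spdk, the 10.0.0.0/24 addresses and port 4420 are all taken from the log itself; the exact nvmf/common.sh source differs in details, so treat this as an illustration of the topology, not the canonical implementation:

    # Give the target-side port its own network stack so one host can act
    # as both NVMe/TCP initiator and target.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Initiator keeps 10.0.0.1 in the default namespace; target gets 10.0.0.2.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listen port, then prove both directions work
    # before nvmf_tgt is launched inside the namespace.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

Every subsequent RPC and every byte of test I/O then crosses a real TCP path between the two namespaces, which is why the two ping checks above gate the rest of the test.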
00:08:56.099 [2024-12-08 06:12:45.980103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.099 [2024-12-08 06:12:45.980166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.099 [2024-12-08 06:12:45.980234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:56.099 [2024-12-08 06:12:45.980237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.099 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.099 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:56.099 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:56.099 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.099 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.100 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.100 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:56.100 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.100 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.100 [2024-12-08 06:12:46.127235] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.100 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.100 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:56.100 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.100 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.100 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:56.100 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:56.100 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:56.100 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.100 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.100 Malloc0 00:08:56.100 [2024-12-08 06:12:46.207964] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=964760 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 964760 /var/tmp/bdevperf.sock 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 964760 ']' 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:56.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:56.358 { 00:08:56.358 "params": { 00:08:56.358 "name": "Nvme$subsystem", 00:08:56.358 "trtype": "$TEST_TRANSPORT", 00:08:56.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:56.358 "adrfam": "ipv4", 00:08:56.358 "trsvcid": "$NVMF_PORT", 00:08:56.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:56.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:56.358 "hdgst": ${hdgst:-false}, 00:08:56.358 "ddgst": ${ddgst:-false} 00:08:56.358 }, 00:08:56.358 "method": "bdev_nvme_attach_controller" 00:08:56.358 } 00:08:56.358 EOF 00:08:56.358 )") 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:56.358 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:56.358 "params": { 00:08:56.358 "name": "Nvme0", 00:08:56.358 "trtype": "tcp", 00:08:56.358 "traddr": "10.0.0.2", 00:08:56.358 "adrfam": "ipv4", 00:08:56.358 "trsvcid": "4420", 00:08:56.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:56.358 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:56.358 "hdgst": false, 00:08:56.358 "ddgst": false 00:08:56.358 }, 00:08:56.358 "method": "bdev_nvme_attach_controller" 00:08:56.358 }' 00:08:56.358 [2024-12-08 06:12:46.291160] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:08:56.359 [2024-12-08 06:12:46.291246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid964760 ] 00:08:56.359 [2024-12-08 06:12:46.360979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.359 [2024-12-08 06:12:46.420648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.616 Running I/O for 10 seconds... 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:56.616 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:56.873 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:56.873 
06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:56.873 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:56.873 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:56.873 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.873 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.873 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.134 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:08:57.134 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:08:57.134 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:57.134 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:57.134 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:57.134 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:57.134 06:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.134 06:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.134 [2024-12-08 06:12:47.006614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125cc70 is same with the state(6) to be set 00:08:57.134 [2024-12-08 06:12:47.006785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125cc70 is same with the state(6) to be set 00:08:57.134 06:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.134 06:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:57.134 06:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.134 06:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:57.134 [2024-12-08 06:12:47.015806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.134 [2024-12-08 06:12:47.015850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.134 [2024-12-08 06:12:47.015870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.134 [2024-12-08 06:12:47.015886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.134 [2024-12-08 06:12:47.015901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.134 [2024-12-08 
06:12:47.015916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.134 [2024-12-08 06:12:47.015943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.134 [2024-12-08 06:12:47.015958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.134 [2024-12-08 06:12:47.015973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74bc60 is same with the state(6) to be set 00:08:57.134 06:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.134 06:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:57.134 [2024-12-08 06:12:47.024572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.134 [2024-12-08 06:12:47.024600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.134 [... the same WRITE command print and ABORTED - SQ DELETION completion pair repeats for cid:1 through cid:62 (lba 82048 through 89856, advancing 128 blocks per command, timestamps 06:12:47.024628 through 06:12:47.026615): every queued I/O on sqid:1 is aborted by the host-removal-triggered controller reset ...] 00:08:57.136 [2024-12-08 06:12:47.026630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:57.136 [2024-12-08 06:12:47.026645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:57.136 [2024-12-08 06:12:47.026790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74bc60 (9): Bad file descriptor 00:08:57.136 [2024-12-08 06:12:47.027898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:57.136 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:57.136 00:08:57.136 Latency(us) 00:08:57.136 [2024-12-08T05:12:47.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.136 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:57.136 Job: Nvme0n1 ended in about 0.42 seconds with error 00:08:57.136 Verification LBA range: start 0x0 length 0x400 00:08:57.136 Nvme0n1 : 0.42 1533.01 95.81 153.30 0.00 36910.15 2585.03 34175.81 00:08:57.136 [2024-12-08T05:12:47.255Z] =================================================================================================================== 00:08:57.136 [2024-12-08T05:12:47.255Z] Total : 1533.01 95.81 153.30 0.00 36910.15 2585.03 34175.81 00:08:57.136 [2024-12-08 06:12:47.030765] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:57.136 [2024-12-08 06:12:47.043426] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:58.073 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 964760 00:08:58.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (964760) - No such process 00:08:58.073 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:58.073 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:58.073 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:58.073 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:58.073 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:58.073 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:58.074 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:58.074 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:58.074 { 00:08:58.074 "params": { 00:08:58.074 "name": "Nvme$subsystem", 00:08:58.074 "trtype": "$TEST_TRANSPORT", 00:08:58.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:58.074 "adrfam": "ipv4", 00:08:58.074 "trsvcid": "$NVMF_PORT", 00:08:58.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:58.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:58.074 "hdgst": ${hdgst:-false}, 00:08:58.074 "ddgst": ${ddgst:-false} 00:08:58.074 }, 00:08:58.074 "method": "bdev_nvme_attach_controller" 00:08:58.074 } 00:08:58.074 EOF 00:08:58.074 )") 00:08:58.074 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:58.074 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:58.074 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:58.074 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:58.074 "params": { 00:08:58.074 "name": "Nvme0", 00:08:58.074 "trtype": "tcp", 00:08:58.074 "traddr": "10.0.0.2", 00:08:58.074 "adrfam": "ipv4", 00:08:58.074 "trsvcid": "4420", 00:08:58.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:58.074 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:58.074 "hdgst": false, 00:08:58.074 "ddgst": false 00:08:58.074 }, 00:08:58.074 "method": "bdev_nvme_attach_controller" 00:08:58.074 }' 00:08:58.074 [2024-12-08 06:12:48.071375] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:08:58.074 [2024-12-08 06:12:48.071459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid965040 ] 00:08:58.074 [2024-12-08 06:12:48.140278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.333 [2024-12-08 06:12:48.200267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.333 Running I/O for 1 seconds... 00:08:59.712 1536.00 IOPS, 96.00 MiB/s 00:08:59.712 Latency(us) 00:08:59.712 [2024-12-08T05:12:49.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.712 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:59.712 Verification LBA range: start 0x0 length 0x400 00:08:59.712 Nvme0n1 : 1.06 1514.21 94.64 0.00 0.00 40103.33 11845.03 51263.72 00:08:59.712 [2024-12-08T05:12:49.831Z] =================================================================================================================== 00:08:59.712 [2024-12-08T05:12:49.831Z] Total : 1514.21 94.64 0.00 0.00 40103.33 11845.03 51263.72 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.712 rmmod nvme_tcp 00:08:59.712 rmmod nvme_fabrics 00:08:59.712 rmmod nvme_keyring 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 964717 ']' 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 964717 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 964717 ']' 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 964717 00:08:59.712 06:12:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 964717 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 964717' 00:08:59.712 killing process with pid 964717 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 964717 00:08:59.712 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 964717 00:08:59.971 [2024-12-08 06:12:50.047788] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:59.971 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:59.971 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:59.971 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:59.971 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:59.971 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:59.971 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:59.971 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:59.971 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.971 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:59.971 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.971 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.971 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:02.513 00:09:02.513 real 0m8.780s 00:09:02.513 user 0m19.356s 00:09:02.513 sys 0m2.870s 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.513 ************************************ 00:09:02.513 END TEST nvmf_host_management 00:09:02.513 ************************************ 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
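The nvmftestfini teardown that just completed before END TEST is the setup sequence in reverse. Condensed, and assuming ip netns delete is what _remove_spdk_ns amounts to (the log only shows the wrapper being eval'd, not its body), it looks like:

    # Unload the kernel initiator modules pulled in for the test.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Drop only the firewall rules this run created: they were inserted with
    # -m comment --comment 'SPDK_NVMF:...', so filtering the save file works.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Assumed body of _remove_spdk_ns: deleting the namespace also returns
    # the physical port cvl_0_0 to the default namespace.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1

Tagging the rules at insert time is what makes the selective grep -v restore safe even on a host carrying unrelated iptables state.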
00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:02.513 ************************************ 00:09:02.513 START TEST nvmf_lvol 00:09:02.513 ************************************ 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:02.513 * Looking for test storage... 00:09:02.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:02.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.513 --rc genhtml_branch_coverage=1 00:09:02.513 --rc genhtml_function_coverage=1 00:09:02.513 --rc genhtml_legend=1 00:09:02.513 --rc geninfo_all_blocks=1 00:09:02.513 --rc geninfo_unexecuted_blocks=1 00:09:02.513 00:09:02.513 ' 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:02.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.513 --rc genhtml_branch_coverage=1 00:09:02.513 --rc genhtml_function_coverage=1 00:09:02.513 --rc genhtml_legend=1 00:09:02.513 --rc geninfo_all_blocks=1 00:09:02.513 --rc geninfo_unexecuted_blocks=1 00:09:02.513 00:09:02.513 ' 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:02.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.513 --rc genhtml_branch_coverage=1 00:09:02.513 --rc genhtml_function_coverage=1 00:09:02.513 --rc genhtml_legend=1 00:09:02.513 --rc geninfo_all_blocks=1 00:09:02.513 --rc geninfo_unexecuted_blocks=1 00:09:02.513 00:09:02.513 ' 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:02.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.513 --rc genhtml_branch_coverage=1 00:09:02.513 --rc genhtml_function_coverage=1 00:09:02.513 --rc genhtml_legend=1 00:09:02.513 --rc geninfo_all_blocks=1 00:09:02.513 --rc geninfo_unexecuted_blocks=1 00:09:02.513 00:09:02.513 ' 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
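The scripts/common.sh trace above is the harness deciding whether the installed lcov predates 2.x: cmp_versions splits each version string on dots, dashes, and colons, pads the shorter component list with zeros, and compares component by component. A condensed sketch of that logic, minus the decimal() digit validation and the xtrace noise; not the verbatim functions, just the same comparison:

  lt() { cmp_versions "$1" '<' "$2"; }       # lt A B -> succeeds when version A < version B
  cmp_versions() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
              [[ $2 == '>' || $2 == '>=' ]]; return       # first differing component decides
          elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
              [[ $2 == '<' || $2 == '<=' ]]; return
          fi
      done
      [[ $2 == *=* ]]                                     # all components equal
  }
  lt 1.15 2 && echo "lcov 1.15 predates 2.x"              # matches the trace: lt 1.15 2 returns 0

That return value is what flips the --rc lcov_branch_coverage=1 spelling of LCOV_OPTS seen in the trace, which lcov renamed in the 2.x series.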
00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.513 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:02.514 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:04.421 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:04.421 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.421 06:12:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:04.421 Found net devices under 0000:84:00.0: cvl_0_0 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:04.421 Found net devices under 0000:84:00.1: cvl_0_1 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.421 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:04.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:04.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:09:04.680 00:09:04.680 --- 10.0.0.2 ping statistics --- 00:09:04.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.680 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:04.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:04.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:09:04.680 00:09:04.680 --- 10.0.0.1 ping statistics --- 00:09:04.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.680 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=967261 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 967261 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 967261 ']' 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.680 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.680 [2024-12-08 06:12:54.693228] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:09:04.680 [2024-12-08 06:12:54.693317] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.680 [2024-12-08 06:12:54.766346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:04.939 [2024-12-08 06:12:54.825967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.939 [2024-12-08 06:12:54.826015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.939 [2024-12-08 06:12:54.826039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.939 [2024-12-08 06:12:54.826050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.939 [2024-12-08 06:12:54.826060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.939 [2024-12-08 06:12:54.827643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.939 [2024-12-08 06:12:54.827701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.939 [2024-12-08 06:12:54.827705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.939 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.939 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:04.939 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:04.939 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:04.939 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.939 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.939 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:05.196 [2024-12-08 06:12:55.216643] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.196 06:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.453 06:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:05.453 06:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.710 06:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:05.710 06:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:06.274 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:06.274 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2b94c7d4-88e4-4e60-bcb5-c69ab07c0bc6 00:09:06.274 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2b94c7d4-88e4-4e60-bcb5-c69ab07c0bc6 lvol 20 00:09:06.532 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e1668048-a023-4259-b9c4-c22200060dbb 00:09:06.532 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:07.098 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e1668048-a023-4259-b9c4-c22200060dbb 00:09:07.099 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:07.357 [2024-12-08 06:12:57.431050] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.357 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:07.616 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=967574 00:09:07.616 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:07.616 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:08.991 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e1668048-a023-4259-b9c4-c22200060dbb MY_SNAPSHOT 00:09:08.991 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7111c508-2ee5-4f5a-857d-95c6719d5ad3 00:09:08.991 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e1668048-a023-4259-b9c4-c22200060dbb 30 00:09:09.558 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7111c508-2ee5-4f5a-857d-95c6719d5ad3 MY_CLONE 00:09:09.816 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b59b3542-5cc2-45ff-94ed-c4e75191f4c4 00:09:09.816 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b59b3542-5cc2-45ff-94ed-c4e75191f4c4 00:09:10.750 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 967574 00:09:18.860 Initializing NVMe Controllers 00:09:18.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:18.860 Controller IO queue size 128, less than required. 00:09:18.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:18.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:18.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:18.860 Initialization complete. Launching workers. 00:09:18.860 ======================================================== 00:09:18.860 Latency(us) 00:09:18.860 Device Information : IOPS MiB/s Average min max 00:09:18.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10365.60 40.49 12350.40 1479.11 69246.95 00:09:18.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10245.70 40.02 12498.16 2053.02 61583.81 00:09:18.860 ======================================================== 00:09:18.860 Total : 20611.29 80.51 12423.85 1479.11 69246.95 00:09:18.860 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e1668048-a023-4259-b9c4-c22200060dbb 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2b94c7d4-88e4-4e60-bcb5-c69ab07c0bc6 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.860 rmmod nvme_tcp 00:09:18.860 rmmod nvme_fabrics 00:09:18.860 rmmod nvme_keyring 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 967261 ']' 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 967261 00:09:18.860 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 967261 ']' 00:09:18.861 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 967261 00:09:18.861 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:18.861 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.861 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 967261 00:09:19.119 06:13:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.119 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.119 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 967261' 00:09:19.119 killing process with pid 967261 00:09:19.119 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 967261 00:09:19.119 06:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 967261 00:09:19.378 06:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.378 06:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.378 06:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.378 06:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:19.378 06:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:19.378 06:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.378 06:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.378 06:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.378 06:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.378 06:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.378 06:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.378 06:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.284 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.284 00:09:21.284 real 0m19.129s 00:09:21.284 user 1m5.334s 00:09:21.284 sys 0m5.514s 00:09:21.284 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.284 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:21.284 ************************************ 00:09:21.284 END TEST nvmf_lvol 00:09:21.284 ************************************ 00:09:21.284 06:13:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:21.284 06:13:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:21.284 06:13:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.284 06:13:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.284 ************************************ 00:09:21.284 START TEST nvmf_lvs_grow 00:09:21.284 ************************************ 00:09:21.284 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:21.284 * Looking for test storage... 
00:09:21.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:21.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.544 --rc genhtml_branch_coverage=1 00:09:21.544 --rc genhtml_function_coverage=1 00:09:21.544 --rc genhtml_legend=1 00:09:21.544 --rc geninfo_all_blocks=1 00:09:21.544 --rc geninfo_unexecuted_blocks=1 00:09:21.544 00:09:21.544 ' 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:21.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.544 --rc genhtml_branch_coverage=1 00:09:21.544 --rc genhtml_function_coverage=1 00:09:21.544 --rc genhtml_legend=1 00:09:21.544 --rc geninfo_all_blocks=1 00:09:21.544 --rc geninfo_unexecuted_blocks=1 00:09:21.544 00:09:21.544 ' 00:09:21.544 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:21.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.544 --rc genhtml_branch_coverage=1 00:09:21.544 --rc genhtml_function_coverage=1 00:09:21.544 --rc genhtml_legend=1 00:09:21.544 --rc geninfo_all_blocks=1 00:09:21.544 --rc geninfo_unexecuted_blocks=1 00:09:21.544 00:09:21.544 ' 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:21.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.545 --rc genhtml_branch_coverage=1 00:09:21.545 --rc genhtml_function_coverage=1 00:09:21.545 --rc genhtml_legend=1 00:09:21.545 --rc geninfo_all_blocks=1 00:09:21.545 --rc geninfo_unexecuted_blocks=1 00:09:21.545 00:09:21.545 ' 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:21.545 06:13:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable
00:09:21.545 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=()
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=()
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=()
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=()
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=()
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:09:24.083 Found 0000:84:00.0 (0x8086 - 0x159b)
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:09:24.083 Found 0000:84:00.1 (0x8086 - 0x159b)
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:09:24.083 Found net devices under 0000:84:00.0: cvl_0_0
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:09:24.083 Found net devices under 0000:84:00.1: cvl_0_1
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:24.083 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:24.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:24.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms
00:09:24.084
00:09:24.084 --- 10.0.0.2 ping statistics ---
00:09:24.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:24.084 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:24.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:24.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms
00:09:24.084
00:09:24.084 --- 10.0.0.1 ping statistics ---
00:09:24.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:24.084 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=970992
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 970992
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 970992 ']'
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:24.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:24.084 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:24.084 [2024-12-08 06:13:13.831290] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:09:24.084 [2024-12-08 06:13:13.831383] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:24.084 [2024-12-08 06:13:13.902063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:24.084 [2024-12-08 06:13:13.955118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:24.084 [2024-12-08 06:13:13.955196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:24.084 [2024-12-08 06:13:13.955221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:24.084 [2024-12-08 06:13:13.955231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:24.084 [2024-12-08 06:13:13.955240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:24.084 [2024-12-08 06:13:13.955919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:24.084 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:24.084 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0
00:09:24.084 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:24.084 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:24.084 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:24.084 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:24.084 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:09:24.347 [2024-12-08 06:13:14.333470] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:24.347 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:09:24.347 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:24.347 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:24.347 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:24.347 ************************************
00:09:24.347 START TEST lvs_grow_clean
00:09:24.347 ************************************
00:09:24.347 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow
00:09:24.347 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:09:24.347 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:09:24.347 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:09:24.347 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:09:24.347 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:09:24.347 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:09:24.347 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:24.348 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:24.348 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:24.605 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:09:24.605 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:09:24.864 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bfe0f53e-fc84-4e72-8bc0-226f895085ad
00:09:24.864 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfe0f53e-fc84-4e72-8bc0-226f895085ad
00:09:24.864 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:09:25.128 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:09:25.128 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:09:25.128 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bfe0f53e-fc84-4e72-8bc0-226f895085ad lvol 150
00:09:25.387 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a9691ff4-d557-452d-8408-73d12af7f23c
00:09:25.387 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:25.387 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:09:25.686 [2024-12-08 06:13:15.748242] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:09:25.686 [2024-12-08 06:13:15.748333] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:09:25.686 true
00:09:25.686 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfe0f53e-fc84-4e72-8bc0-226f895085ad
00:09:25.686 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:09:25.960 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:09:25.960 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:26.217 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a9691ff4-d557-452d-8408-73d12af7f23c
00:09:26.476 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:09:26.738 [2024-12-08 06:13:16.827555] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:26.738 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:26.997 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=971449
00:09:26.997 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:09:27.256 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:09:27.256 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 971449 /var/tmp/bdevperf.sock
00:09:27.256 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 971449 ']'
00:09:27.256 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:09:27.256 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:27.256 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:09:27.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:09:27.256 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:27.256 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:09:27.256 [2024-12-08 06:13:17.160545] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:09:27.256 [2024-12-08 06:13:17.160627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid971449 ]
00:09:27.256 [2024-12-08 06:13:17.229560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:27.256 [2024-12-08 06:13:17.287401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:27.514 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:27.514 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:09:27.514 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:09:27.772 Nvme0n1
00:09:27.772 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:09:28.030 [
00:09:28.030 {
00:09:28.030 "name": "Nvme0n1",
00:09:28.030 "aliases": [
00:09:28.030 "a9691ff4-d557-452d-8408-73d12af7f23c"
00:09:28.030 ],
00:09:28.030 "product_name": "NVMe disk",
00:09:28.030 "block_size": 4096,
00:09:28.030 "num_blocks": 38912,
00:09:28.030 "uuid": "a9691ff4-d557-452d-8408-73d12af7f23c",
00:09:28.030 "numa_id": 1,
00:09:28.030 "assigned_rate_limits": {
00:09:28.030 "rw_ios_per_sec": 0,
00:09:28.030 "rw_mbytes_per_sec": 0,
00:09:28.030 "r_mbytes_per_sec": 0,
00:09:28.030 "w_mbytes_per_sec": 0
00:09:28.030 },
00:09:28.030 "claimed": false,
00:09:28.030 "zoned": false,
00:09:28.030 "supported_io_types": {
00:09:28.030 "read": true,
00:09:28.030 "write": true,
00:09:28.030 "unmap": true,
00:09:28.030 "flush": true,
00:09:28.030 "reset": true,
00:09:28.030 "nvme_admin": true,
00:09:28.030 "nvme_io": true,
00:09:28.030 "nvme_io_md": false,
00:09:28.030 "write_zeroes": true,
00:09:28.030 "zcopy": false,
00:09:28.030 "get_zone_info": false,
00:09:28.030 "zone_management": false,
00:09:28.030 "zone_append": false,
00:09:28.030 "compare": true,
00:09:28.030 "compare_and_write": true,
00:09:28.030 "abort": true,
00:09:28.030 "seek_hole": false,
00:09:28.030 "seek_data": false,
00:09:28.030 "copy": true,
00:09:28.030 "nvme_iov_md": false
00:09:28.030 },
00:09:28.030 "memory_domains": [
00:09:28.030 {
00:09:28.030 "dma_device_id": "system",
00:09:28.030 "dma_device_type": 1
00:09:28.030 }
00:09:28.030 ],
00:09:28.030 "driver_specific": {
00:09:28.030 "nvme": [
00:09:28.030 {
00:09:28.030 "trid": {
00:09:28.030 "trtype": "TCP",
00:09:28.030 "adrfam": "IPv4",
00:09:28.030 "traddr": "10.0.0.2",
00:09:28.030 "trsvcid": "4420",
00:09:28.030 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:09:28.030 },
00:09:28.030 "ctrlr_data": {
00:09:28.030 "cntlid": 1,
00:09:28.030 "vendor_id": "0x8086",
00:09:28.030 "model_number": "SPDK bdev Controller",
00:09:28.030 "serial_number": "SPDK0",
00:09:28.030 "firmware_revision": "25.01",
00:09:28.030 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:28.030 "oacs": {
00:09:28.030 "security": 0,
00:09:28.030 "format": 0,
00:09:28.030 "firmware": 0,
00:09:28.030 "ns_manage": 0
00:09:28.030 },
00:09:28.030 "multi_ctrlr": true,
"ana_reporting": false
00:09:28.030 },
00:09:28.030 "vs": {
00:09:28.030 "nvme_version": "1.3"
00:09:28.031 },
00:09:28.031 "ns_data": {
00:09:28.031 "id": 1,
00:09:28.031 "can_share": true
00:09:28.031 }
00:09:28.031 }
00:09:28.031 ],
00:09:28.031 "mp_policy": "active_passive"
00:09:28.031 }
00:09:28.031 }
00:09:28.031 ]
00:09:28.031 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=971464
00:09:28.031 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:09:28.031 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:09:28.288 Running I/O for 10 seconds...
00:09:29.224 Latency(us)
00:09:29.224 [2024-12-08T05:13:19.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:29.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:29.224 Nvme0n1 : 1.00 16638.00 64.99 0.00 0.00 0.00 0.00 0.00
00:09:29.224 [2024-12-08T05:13:19.343Z] ===================================================================================================================
00:09:29.224 [2024-12-08T05:13:19.343Z] Total : 16638.00 64.99 0.00 0.00 0.00 0.00 0.00
00:09:29.224
00:09:30.161 06:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bfe0f53e-fc84-4e72-8bc0-226f895085ad
00:09:30.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:30.161 Nvme0n1 : 2.00 16708.50 65.27 0.00 0.00 0.00 0.00 0.00
[2024-12-08T05:13:20.280Z] ===================================================================================================================
[2024-12-08T05:13:20.280Z] Total : 16708.50 65.27 0.00 0.00 0.00 0.00 0.00
00:09:30.161
00:09:30.419 true
00:09:30.419 06:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfe0f53e-fc84-4e72-8bc0-226f895085ad
00:09:30.419 06:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:09:30.679 06:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:09:30.679 06:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:09:30.679 06:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 971464
00:09:31.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:31.249 Nvme0n1 : 3.00 16664.00 65.09 0.00 0.00 0.00 0.00 0.00
00:09:31.249 [2024-12-08T05:13:21.368Z] ===================================================================================================================
00:09:31.249 [2024-12-08T05:13:21.368Z] Total : 16664.00 65.09 0.00 0.00 0.00 0.00 0.00
00:09:31.249
00:09:32.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:32.188 Nvme0n1 : 4.00 16769.50 65.51 0.00 0.00 0.00 0.00 0.00
00:09:32.188 [2024-12-08T05:13:22.307Z] ===================================================================================================================
00:09:32.188 [2024-12-08T05:13:22.307Z] Total : 16769.50 65.51 0.00 0.00 0.00 0.00 0.00
00:09:32.188
00:09:33.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:33.127 Nvme0n1 : 5.00 16847.00 65.81 0.00 0.00 0.00 0.00 0.00
00:09:33.127 [2024-12-08T05:13:23.246Z] ===================================================================================================================
00:09:33.127 [2024-12-08T05:13:23.246Z] Total : 16847.00 65.81 0.00 0.00 0.00 0.00 0.00
00:09:33.127
00:09:34.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:34.062 Nvme0n1 : 6.00 16956.33 66.24 0.00 0.00 0.00 0.00 0.00
00:09:34.062 [2024-12-08T05:13:24.181Z] ===================================================================================================================
00:09:34.062 [2024-12-08T05:13:24.181Z] Total : 16956.33 66.24 0.00 0.00 0.00 0.00 0.00
00:09:34.062
00:09:35.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:35.439 Nvme0n1 : 7.00 17012.00 66.45 0.00 0.00 0.00 0.00 0.00
00:09:35.439 [2024-12-08T05:13:25.558Z] ===================================================================================================================
00:09:35.439 [2024-12-08T05:13:25.558Z] Total : 17012.00 66.45 0.00 0.00 0.00 0.00 0.00
00:09:35.439
00:09:36.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:36.373 Nvme0n1 : 8.00 17076.62 66.71 0.00 0.00 0.00 0.00 0.00
00:09:36.373 [2024-12-08T05:13:26.492Z] ===================================================================================================================
00:09:36.373 [2024-12-08T05:13:26.492Z] Total : 17076.62 66.71 0.00 0.00 0.00 0.00 0.00
00:09:36.373
00:09:37.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:37.311 Nvme0n1 : 9.00 17119.67 66.87 0.00 0.00 0.00 0.00 0.00
00:09:37.311 [2024-12-08T05:13:27.430Z] ===================================================================================================================
00:09:37.311 [2024-12-08T05:13:27.430Z] Total : 17119.67 66.87 0.00 0.00 0.00 0.00 0.00
00:09:37.311
00:09:38.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:38.250 Nvme0n1 : 10.00 17154.10 67.01 0.00 0.00 0.00 0.00 0.00
00:09:38.250 [2024-12-08T05:13:28.369Z] ===================================================================================================================
00:09:38.250 [2024-12-08T05:13:28.369Z] Total : 17154.10 67.01 0.00 0.00 0.00 0.00 0.00
00:09:38.250
00:09:38.250
00:09:38.250 Latency(us)
00:09:38.250 [2024-12-08T05:13:28.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:38.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:38.250 Nvme0n1 : 10.01 17155.91 67.02 0.00 0.00 7457.01 4369.07 15146.10
00:09:38.250 [2024-12-08T05:13:28.369Z] ===================================================================================================================
00:09:38.250 [2024-12-08T05:13:28.369Z] Total : 17155.91 67.02 0.00 0.00 7457.01 4369.07 15146.10
00:09:38.250 {
00:09:38.250 "results": [
00:09:38.250 {
00:09:38.250 "job": "Nvme0n1",
00:09:38.250 "core_mask": "0x2",
00:09:38.250 "workload": "randwrite",
00:09:38.250 "status": "finished",
00:09:38.250 "queue_depth": 128,
00:09:38.250 "io_size": 4096,
"runtime": 10.006407, 00:09:38.250 "iops": 17155.9082096101, 00:09:38.250 "mibps": 67.01526644378946, 00:09:38.250 "io_failed": 0, 00:09:38.250 "io_timeout": 0, 00:09:38.250 "avg_latency_us": 7457.011802359537, 00:09:38.250 "min_latency_us": 4369.066666666667, 00:09:38.250 "max_latency_us": 15146.097777777777 00:09:38.250 } 00:09:38.250 ], 00:09:38.250 "core_count": 1 00:09:38.250 } 00:09:38.250 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 971449 00:09:38.250 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 971449 ']' 00:09:38.250 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 971449 00:09:38.250 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:38.250 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.250 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 971449 00:09:38.250 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:38.250 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:38.250 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 971449' 00:09:38.250 killing process with pid 971449 00:09:38.250 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 971449 00:09:38.250 Received shutdown signal, test time was about 10.000000 seconds 00:09:38.250 00:09:38.250 Latency(us) 00:09:38.250 [2024-12-08T05:13:28.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.250 [2024-12-08T05:13:28.369Z] =================================================================================================================== 00:09:38.250 [2024-12-08T05:13:28.369Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:38.250 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 971449 00:09:38.508 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:38.771 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:39.033 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfe0f53e-fc84-4e72-8bc0-226f895085ad 00:09:39.033 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:39.292 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:39.292 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:39.292 06:13:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:39.552 [2024-12-08 06:13:29.530650] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:39.552 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfe0f53e-fc84-4e72-8bc0-226f895085ad 00:09:39.552 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:39.552 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfe0f53e-fc84-4e72-8bc0-226f895085ad 00:09:39.552 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.552 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.552 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.552 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.552 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.552 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.552 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.552 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:39.552 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfe0f53e-fc84-4e72-8bc0-226f895085ad 00:09:39.813 request: 00:09:39.813 { 00:09:39.813 "uuid": "bfe0f53e-fc84-4e72-8bc0-226f895085ad", 00:09:39.813 "method": "bdev_lvol_get_lvstores", 00:09:39.813 "req_id": 1 00:09:39.813 } 00:09:39.813 Got JSON-RPC error response 00:09:39.813 response: 00:09:39.813 { 00:09:39.813 "code": -19, 00:09:39.813 "message": "No such device" 00:09:39.813 } 00:09:39.813 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:39.813 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:39.813 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:39.813 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:39.813 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:40.072 aio_bdev 00:09:40.072 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a9691ff4-d557-452d-8408-73d12af7f23c 00:09:40.072 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a9691ff4-d557-452d-8408-73d12af7f23c 00:09:40.072 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.072 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:40.072 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.072 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.072 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:40.331 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a9691ff4-d557-452d-8408-73d12af7f23c -t 2000 00:09:40.591 [ 00:09:40.591 { 00:09:40.591 "name": "a9691ff4-d557-452d-8408-73d12af7f23c", 00:09:40.591 "aliases": [ 00:09:40.591 "lvs/lvol" 00:09:40.591 ], 00:09:40.591 "product_name": "Logical Volume", 00:09:40.591 "block_size": 4096, 00:09:40.591 "num_blocks": 38912, 00:09:40.591 "uuid": "a9691ff4-d557-452d-8408-73d12af7f23c", 00:09:40.591 "assigned_rate_limits": { 00:09:40.591 "rw_ios_per_sec": 0, 00:09:40.591 "rw_mbytes_per_sec": 0, 00:09:40.591 "r_mbytes_per_sec": 0, 00:09:40.591 "w_mbytes_per_sec": 0 00:09:40.591 }, 00:09:40.591 "claimed": false, 00:09:40.591 "zoned": false, 00:09:40.591 "supported_io_types": { 00:09:40.591 "read": true, 00:09:40.591 "write": true, 00:09:40.591 "unmap": true, 00:09:40.591 "flush": false, 00:09:40.591 "reset": true, 00:09:40.591 "nvme_admin": false, 00:09:40.591 "nvme_io": false, 00:09:40.591 "nvme_io_md": false, 00:09:40.591 "write_zeroes": true, 00:09:40.591 "zcopy": false, 00:09:40.591 "get_zone_info": false, 00:09:40.591 "zone_management": false, 00:09:40.591 "zone_append": false, 00:09:40.591 "compare": false, 00:09:40.591 "compare_and_write": false, 00:09:40.591 "abort": false, 00:09:40.591 "seek_hole": true, 00:09:40.591 "seek_data": true, 00:09:40.591 "copy": false, 00:09:40.591 "nvme_iov_md": false 00:09:40.591 }, 00:09:40.591 "driver_specific": { 00:09:40.591 "lvol": { 00:09:40.591 "lvol_store_uuid": "bfe0f53e-fc84-4e72-8bc0-226f895085ad", 00:09:40.591 "base_bdev": "aio_bdev", 00:09:40.591 "thin_provision": false, 00:09:40.591 "num_allocated_clusters": 38, 00:09:40.591 "snapshot": false, 00:09:40.591 "clone": false, 00:09:40.591 "esnap_clone": false 00:09:40.591 } 00:09:40.591 } 00:09:40.591 } 00:09:40.591 ] 00:09:40.591 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:40.591 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfe0f53e-fc84-4e72-8bc0-226f895085ad 00:09:40.591 
06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:41.160 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:41.160 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bfe0f53e-fc84-4e72-8bc0-226f895085ad 00:09:41.160 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:41.160 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:41.160 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a9691ff4-d557-452d-8408-73d12af7f23c 00:09:41.419 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bfe0f53e-fc84-4e72-8bc0-226f895085ad 00:09:41.985 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:41.985 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:41.985 00:09:41.985 real 0m17.714s 00:09:41.985 user 0m17.293s 00:09:41.985 sys 0m1.861s 00:09:41.985 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.985 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:41.985 ************************************ 00:09:41.985 END TEST lvs_grow_clean 00:09:41.985 ************************************ 00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:42.243 ************************************ 00:09:42.243 START TEST lvs_grow_dirty 00:09:42.243 ************************************ 00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:42.243 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:09:42.503 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:09:42.503 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:09:42.762 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4513681b-274b-4046-8c73-b7abf59b3792
00:09:42.762 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4513681b-274b-4046-8c73-b7abf59b3792
00:09:42.762 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:09:43.021 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:09:43.021 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:09:43.021 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4513681b-274b-4046-8c73-b7abf59b3792 lvol 150
00:09:43.279 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a1e9d08a-5164-460f-9c8a-5f9391d431bb
00:09:43.280 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:09:43.280 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:09:43.537 [2024-12-08 06:13:33.504100] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:09:43.537 [2024-12-08 06:13:33.504190] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:09:43.537 true
00:09:43.537 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4513681b-274b-4046-8c73-b7abf59b3792
00:09:43.537 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:09:43.795 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:09:43.795 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:44.052 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a1e9d08a-5164-460f-9c8a-5f9391d431bb
00:09:44.310 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:09:44.568 [2024-12-08 06:13:34.571421] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:44.568 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:44.826 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=973524
00:09:44.826 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:09:44.826 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:09:44.826 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 973524 /var/tmp/bdevperf.sock
00:09:44.826 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 973524 ']'
00:09:44.826 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:09:44.826 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:44.826 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:09:44.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:09:44.826 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:44.826 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:09:44.826 [2024-12-08 06:13:34.892211] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:09:44.826 [2024-12-08 06:13:34.892286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid973524 ]
00:09:45.083 [2024-12-08 06:13:34.957602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:45.083 [2024-12-08 06:13:35.014259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:45.083 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:45.083 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:09:45.083 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:09:45.653 Nvme0n1
00:09:45.653 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:09:45.653 [
00:09:45.653 {
00:09:45.653 "name": "Nvme0n1",
00:09:45.653 "aliases": [
00:09:45.653 "a1e9d08a-5164-460f-9c8a-5f9391d431bb"
00:09:45.653 ],
00:09:45.653 "product_name": "NVMe disk",
00:09:45.653 "block_size": 4096,
00:09:45.653 "num_blocks": 38912,
00:09:45.653 "uuid": "a1e9d08a-5164-460f-9c8a-5f9391d431bb",
00:09:45.653 "numa_id": 1,
00:09:45.653 "assigned_rate_limits": {
00:09:45.653 "rw_ios_per_sec": 0,
00:09:45.653 "rw_mbytes_per_sec": 0,
00:09:45.653 "r_mbytes_per_sec": 0,
00:09:45.653 "w_mbytes_per_sec": 0
00:09:45.653 },
00:09:45.653 "claimed": false,
00:09:45.653 "zoned": false,
00:09:45.653 "supported_io_types": {
00:09:45.653 "read": true,
00:09:45.653 "write": true,
00:09:45.653 "unmap": true,
00:09:45.653 "flush": true,
00:09:45.653 "reset": true,
00:09:45.653 "nvme_admin": true,
00:09:45.653 "nvme_io": true,
00:09:45.653 "nvme_io_md": false,
00:09:45.653 "write_zeroes": true,
00:09:45.653 "zcopy": false,
00:09:45.653 "get_zone_info": false,
00:09:45.653 "zone_management": false,
00:09:45.653 "zone_append": false,
00:09:45.653 "compare": true,
00:09:45.653 "compare_and_write": true,
00:09:45.653 "abort": true,
00:09:45.653 "seek_hole": false,
00:09:45.653 "seek_data": false,
00:09:45.653 "copy": true,
00:09:45.653 "nvme_iov_md": false
00:09:45.653 },
00:09:45.653 "memory_domains": [
00:09:45.653 {
00:09:45.653 "dma_device_id": "system",
00:09:45.653 "dma_device_type": 1
00:09:45.653 }
00:09:45.653 ],
00:09:45.653 "driver_specific": {
00:09:45.653 "nvme": [
00:09:45.653 {
00:09:45.653 "trid": {
00:09:45.653 "trtype": "TCP",
00:09:45.653 "adrfam": "IPv4",
00:09:45.653 "traddr": "10.0.0.2",
00:09:45.653 "trsvcid": "4420",
00:09:45.653 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:09:45.653 },
00:09:45.653 "ctrlr_data": {
00:09:45.653 "cntlid": 1,
00:09:45.653 "vendor_id": "0x8086",
00:09:45.653 "model_number": "SPDK bdev Controller",
00:09:45.653 "serial_number": "SPDK0",
00:09:45.653 "firmware_revision": "25.01",
00:09:45.653 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:09:45.653 "oacs": {
00:09:45.653 "security": 0,
00:09:45.653 "format": 0,
00:09:45.653 "firmware": 0,
00:09:45.653 "ns_manage": 0
00:09:45.653 },
00:09:45.653 "multi_ctrlr": true,
"ana_reporting": false 00:09:45.653 }, 00:09:45.653 "vs": { 00:09:45.653 "nvme_version": "1.3" 00:09:45.653 }, 00:09:45.653 "ns_data": { 00:09:45.653 "id": 1, 00:09:45.653 "can_share": true 00:09:45.653 } 00:09:45.653 } 00:09:45.653 ], 00:09:45.653 "mp_policy": "active_passive" 00:09:45.653 } 00:09:45.653 } 00:09:45.653 ] 00:09:45.911 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=973651 00:09:45.911 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:45.911 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:45.911 Running I/O for 10 seconds... 00:09:46.845 Latency(us) 00:09:46.845 [2024-12-08T05:13:36.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.845 Nvme0n1 : 1.00 16585.00 64.79 0.00 0.00 0.00 0.00 0.00 00:09:46.845 [2024-12-08T05:13:36.964Z] =================================================================================================================== 00:09:46.845 [2024-12-08T05:13:36.964Z] Total : 16585.00 64.79 0.00 0.00 0.00 0.00 0.00 00:09:46.845 00:09:47.782 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4513681b-274b-4046-8c73-b7abf59b3792 00:09:47.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.782 Nvme0n1 : 2.00 16773.50 65.52 0.00 0.00 0.00 0.00 0.00 00:09:47.782 [2024-12-08T05:13:37.901Z] =================================================================================================================== 00:09:47.782 [2024-12-08T05:13:37.901Z] Total : 16773.50 65.52 0.00 0.00 0.00 0.00 0.00 00:09:47.782 00:09:48.040 true 00:09:48.040 06:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:48.040 06:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4513681b-274b-4046-8c73-b7abf59b3792 00:09:48.608 06:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:48.608 06:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:48.608 06:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 973651 00:09:48.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.868 Nvme0n1 : 3.00 16793.33 65.60 0.00 0.00 0.00 0.00 0.00 00:09:48.868 [2024-12-08T05:13:38.987Z] =================================================================================================================== 00:09:48.868 [2024-12-08T05:13:38.987Z] Total : 16793.33 65.60 0.00 0.00 0.00 0.00 0.00 00:09:48.868 00:09:49.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.805 Nvme0n1 : 4.00 16860.25 65.86 0.00 0.00 0.00 0.00 0.00 00:09:49.805 [2024-12-08T05:13:39.924Z] 
=================================================================================================================== 00:09:49.805 [2024-12-08T05:13:39.924Z] Total : 16860.25 65.86 0.00 0.00 0.00 0.00 0.00 00:09:49.805 00:09:51.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.183 Nvme0n1 : 5.00 16830.40 65.74 0.00 0.00 0.00 0.00 0.00 00:09:51.183 [2024-12-08T05:13:41.302Z] =================================================================================================================== 00:09:51.183 [2024-12-08T05:13:41.302Z] Total : 16830.40 65.74 0.00 0.00 0.00 0.00 0.00 00:09:51.183 00:09:52.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.120 Nvme0n1 : 6.00 16916.83 66.08 0.00 0.00 0.00 0.00 0.00 00:09:52.120 [2024-12-08T05:13:42.239Z] =================================================================================================================== 00:09:52.120 [2024-12-08T05:13:42.239Z] Total : 16916.83 66.08 0.00 0.00 0.00 0.00 0.00 00:09:52.120 00:09:53.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.054 Nvme0n1 : 7.00 16959.86 66.25 0.00 0.00 0.00 0.00 0.00 00:09:53.054 [2024-12-08T05:13:43.173Z] =================================================================================================================== 00:09:53.054 [2024-12-08T05:13:43.173Z] Total : 16959.86 66.25 0.00 0.00 0.00 0.00 0.00 00:09:53.054 00:09:53.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.990 Nvme0n1 : 8.00 17004.62 66.42 0.00 0.00 0.00 0.00 0.00 00:09:53.990 [2024-12-08T05:13:44.109Z] =================================================================================================================== 00:09:53.990 [2024-12-08T05:13:44.109Z] Total : 17004.62 66.42 0.00 0.00 0.00 0.00 0.00 00:09:53.990 00:09:54.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.927 Nvme0n1 : 9.00 17037.00 66.55 0.00 0.00 0.00 0.00 0.00 00:09:54.927 [2024-12-08T05:13:45.046Z] =================================================================================================================== 00:09:54.927 [2024-12-08T05:13:45.046Z] Total : 17037.00 66.55 0.00 0.00 0.00 0.00 0.00 00:09:54.927 00:09:55.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.865 Nvme0n1 : 10.00 17069.80 66.68 0.00 0.00 0.00 0.00 0.00 00:09:55.865 [2024-12-08T05:13:45.984Z] =================================================================================================================== 00:09:55.865 [2024-12-08T05:13:45.984Z] Total : 17069.80 66.68 0.00 0.00 0.00 0.00 0.00 00:09:55.865 00:09:55.865 00:09:55.865 Latency(us) 00:09:55.865 [2024-12-08T05:13:45.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.865 Nvme0n1 : 10.00 17069.52 66.68 0.00 0.00 7494.37 1929.67 14660.65 00:09:55.865 [2024-12-08T05:13:45.984Z] =================================================================================================================== 00:09:55.865 [2024-12-08T05:13:45.984Z] Total : 17069.52 66.68 0.00 0.00 7494.37 1929.67 14660.65 00:09:55.865 { 00:09:55.865 "results": [ 00:09:55.865 { 00:09:55.865 "job": "Nvme0n1", 00:09:55.865 "core_mask": "0x2", 00:09:55.865 "workload": "randwrite", 00:09:55.865 "status": "finished", 00:09:55.865 "queue_depth": 128, 00:09:55.865 "io_size": 4096, 00:09:55.865 
"runtime": 10.003912, 00:09:55.865 "iops": 17069.52240283601, 00:09:55.865 "mibps": 66.67782188607816, 00:09:55.865 "io_failed": 0, 00:09:55.865 "io_timeout": 0, 00:09:55.865 "avg_latency_us": 7494.365765234437, 00:09:55.865 "min_latency_us": 1929.671111111111, 00:09:55.865 "max_latency_us": 14660.645925925926 00:09:55.865 } 00:09:55.865 ], 00:09:55.865 "core_count": 1 00:09:55.865 } 00:09:55.865 06:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 973524 00:09:55.865 06:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 973524 ']' 00:09:55.865 06:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 973524 00:09:55.865 06:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:55.865 06:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.865 06:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973524 00:09:55.865 06:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:55.865 06:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:55.865 06:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973524' 00:09:55.865 killing process with pid 973524 00:09:55.865 06:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 973524 00:09:55.865 Received shutdown signal, test time was about 10.000000 seconds 00:09:55.865 00:09:55.865 Latency(us) 00:09:55.865 [2024-12-08T05:13:45.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.865 [2024-12-08T05:13:45.984Z] =================================================================================================================== 00:09:55.865 [2024-12-08T05:13:45.984Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:55.865 06:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 973524 00:09:56.123 06:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:56.421 06:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:56.703 06:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4513681b-274b-4046-8c73-b7abf59b3792 00:09:56.703 06:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:56.963 06:13:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 970992 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 970992 00:09:56.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 970992 Killed "${NVMF_APP[@]}" "$@" 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=974999 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 974999 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 974999 ']' 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.963 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:57.222 [2024-12-08 06:13:47.123076] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:09:57.222 [2024-12-08 06:13:47.123175] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.222 [2024-12-08 06:13:47.196929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.222 [2024-12-08 06:13:47.254304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.222 [2024-12-08 06:13:47.254394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.222 [2024-12-08 06:13:47.254408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.222 [2024-12-08 06:13:47.254419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
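At this point the lvs_grow_dirty test has confirmed the lvstore is dirty (61 free clusters), killed the first target (pid 970992) with SIGKILL so the lvstore metadata on the AIO backing file is left unflushed, and is starting a fresh nvmf_tgt to replay it. A minimal sketch of this dirty-restart step, assuming the NVMF_APP array and waitforlisten helper conventions these SPDK test scripts use:

# Leave the lvstore dirty: SIGKILL the running target instead of a clean shutdown.
kill -9 "$old_nvmfpid"
wait "$old_nvmfpid" || true              # reap; a non-zero status is expected here

# Restart the target on one core; re-creating aio_bdev afterwards makes the
# blobstore run its recovery path ("Performing recovery on blobstore" below).
"${NVMF_APP[@]}" -m 0x1 &
nvmfpid=$!
waitforlisten "$nvmfpid"                 # test helper: poll /var/tmp/spdk.sock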
00:09:57.222 [2024-12-08 06:13:47.254428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.222 [2024-12-08 06:13:47.255126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.480 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.480 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:57.480 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:57.480 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:57.480 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:57.481 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.481 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:57.740 [2024-12-08 06:13:47.645247] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:57.740 [2024-12-08 06:13:47.645382] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:57.740 [2024-12-08 06:13:47.645429] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:57.740 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:57.740 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a1e9d08a-5164-460f-9c8a-5f9391d431bb 00:09:57.740 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a1e9d08a-5164-460f-9c8a-5f9391d431bb 00:09:57.740 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.740 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:57.740 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.740 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.740 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:57.999 06:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a1e9d08a-5164-460f-9c8a-5f9391d431bb -t 2000 00:09:58.259 [ 00:09:58.259 { 00:09:58.259 "name": "a1e9d08a-5164-460f-9c8a-5f9391d431bb", 00:09:58.259 "aliases": [ 00:09:58.259 "lvs/lvol" 00:09:58.259 ], 00:09:58.259 "product_name": "Logical Volume", 00:09:58.259 "block_size": 4096, 00:09:58.259 "num_blocks": 38912, 00:09:58.259 "uuid": "a1e9d08a-5164-460f-9c8a-5f9391d431bb", 00:09:58.259 "assigned_rate_limits": { 00:09:58.259 "rw_ios_per_sec": 0, 00:09:58.259 "rw_mbytes_per_sec": 0, 
00:09:58.259 "r_mbytes_per_sec": 0, 00:09:58.259 "w_mbytes_per_sec": 0 00:09:58.259 }, 00:09:58.259 "claimed": false, 00:09:58.259 "zoned": false, 00:09:58.259 "supported_io_types": { 00:09:58.259 "read": true, 00:09:58.259 "write": true, 00:09:58.259 "unmap": true, 00:09:58.259 "flush": false, 00:09:58.259 "reset": true, 00:09:58.259 "nvme_admin": false, 00:09:58.259 "nvme_io": false, 00:09:58.259 "nvme_io_md": false, 00:09:58.259 "write_zeroes": true, 00:09:58.259 "zcopy": false, 00:09:58.259 "get_zone_info": false, 00:09:58.259 "zone_management": false, 00:09:58.259 "zone_append": false, 00:09:58.259 "compare": false, 00:09:58.259 "compare_and_write": false, 00:09:58.259 "abort": false, 00:09:58.259 "seek_hole": true, 00:09:58.259 "seek_data": true, 00:09:58.259 "copy": false, 00:09:58.259 "nvme_iov_md": false 00:09:58.259 }, 00:09:58.259 "driver_specific": { 00:09:58.259 "lvol": { 00:09:58.259 "lvol_store_uuid": "4513681b-274b-4046-8c73-b7abf59b3792", 00:09:58.259 "base_bdev": "aio_bdev", 00:09:58.259 "thin_provision": false, 00:09:58.259 "num_allocated_clusters": 38, 00:09:58.259 "snapshot": false, 00:09:58.259 "clone": false, 00:09:58.259 "esnap_clone": false 00:09:58.259 } 00:09:58.259 } 00:09:58.259 } 00:09:58.259 ] 00:09:58.259 06:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:58.259 06:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4513681b-274b-4046-8c73-b7abf59b3792 00:09:58.259 06:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:58.518 06:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:58.518 06:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4513681b-274b-4046-8c73-b7abf59b3792 00:09:58.518 06:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:58.776 06:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:58.776 06:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:59.036 [2024-12-08 06:13:48.995043] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:59.036 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4513681b-274b-4046-8c73-b7abf59b3792 00:09:59.036 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:59.036 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4513681b-274b-4046-8c73-b7abf59b3792 00:09:59.036 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:59.036 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.036 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:59.036 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.036 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:59.036 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.036 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:59.036 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:59.036 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4513681b-274b-4046-8c73-b7abf59b3792 00:09:59.296 request: 00:09:59.296 { 00:09:59.296 "uuid": "4513681b-274b-4046-8c73-b7abf59b3792", 00:09:59.296 "method": "bdev_lvol_get_lvstores", 00:09:59.296 "req_id": 1 00:09:59.296 } 00:09:59.296 Got JSON-RPC error response 00:09:59.296 response: 00:09:59.296 { 00:09:59.296 "code": -19, 00:09:59.296 "message": "No such device" 00:09:59.296 } 00:09:59.296 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:59.296 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:59.296 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:59.296 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:59.296 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:59.554 aio_bdev 00:09:59.554 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a1e9d08a-5164-460f-9c8a-5f9391d431bb 00:09:59.554 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a1e9d08a-5164-460f-9c8a-5f9391d431bb 00:09:59.554 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.554 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:59.554 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.554 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.554 06:13:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:59.817 06:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a1e9d08a-5164-460f-9c8a-5f9391d431bb -t 2000 00:10:00.075 [ 00:10:00.075 { 00:10:00.075 "name": "a1e9d08a-5164-460f-9c8a-5f9391d431bb", 00:10:00.075 "aliases": [ 00:10:00.075 "lvs/lvol" 00:10:00.075 ], 00:10:00.075 "product_name": "Logical Volume", 00:10:00.075 "block_size": 4096, 00:10:00.075 "num_blocks": 38912, 00:10:00.075 "uuid": "a1e9d08a-5164-460f-9c8a-5f9391d431bb", 00:10:00.075 "assigned_rate_limits": { 00:10:00.075 "rw_ios_per_sec": 0, 00:10:00.075 "rw_mbytes_per_sec": 0, 00:10:00.075 "r_mbytes_per_sec": 0, 00:10:00.075 "w_mbytes_per_sec": 0 00:10:00.075 }, 00:10:00.075 "claimed": false, 00:10:00.075 "zoned": false, 00:10:00.075 "supported_io_types": { 00:10:00.075 "read": true, 00:10:00.075 "write": true, 00:10:00.075 "unmap": true, 00:10:00.075 "flush": false, 00:10:00.075 "reset": true, 00:10:00.075 "nvme_admin": false, 00:10:00.075 "nvme_io": false, 00:10:00.075 "nvme_io_md": false, 00:10:00.075 "write_zeroes": true, 00:10:00.075 "zcopy": false, 00:10:00.075 "get_zone_info": false, 00:10:00.075 "zone_management": false, 00:10:00.075 "zone_append": false, 00:10:00.075 "compare": false, 00:10:00.075 "compare_and_write": false, 00:10:00.075 "abort": false, 00:10:00.075 "seek_hole": true, 00:10:00.075 "seek_data": true, 00:10:00.075 "copy": false, 00:10:00.075 "nvme_iov_md": false 00:10:00.075 }, 00:10:00.075 "driver_specific": { 00:10:00.075 "lvol": { 00:10:00.075 "lvol_store_uuid": "4513681b-274b-4046-8c73-b7abf59b3792", 00:10:00.075 "base_bdev": "aio_bdev", 00:10:00.075 "thin_provision": false, 00:10:00.075 "num_allocated_clusters": 38, 00:10:00.075 "snapshot": false, 00:10:00.075 "clone": false, 00:10:00.075 "esnap_clone": false 00:10:00.075 } 00:10:00.075 } 00:10:00.075 } 00:10:00.075 ] 00:10:00.075 06:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:00.075 06:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:00.075 06:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4513681b-274b-4046-8c73-b7abf59b3792 00:10:00.398 06:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:00.398 06:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4513681b-274b-4046-8c73-b7abf59b3792 00:10:00.398 06:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:00.656 06:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:00.657 06:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a1e9d08a-5164-460f-9c8a-5f9391d431bb 00:10:00.916 06:13:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4513681b-274b-4046-8c73-b7abf59b3792 00:10:01.175 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:01.433 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:01.433 00:10:01.433 real 0m19.379s 00:10:01.433 user 0m48.788s 00:10:01.433 sys 0m4.894s 00:10:01.433 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.433 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:01.433 ************************************ 00:10:01.433 END TEST lvs_grow_dirty 00:10:01.433 ************************************ 00:10:01.433 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:01.433 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:01.433 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:01.433 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:01.433 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:01.693 nvmf_trace.0 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.693 rmmod nvme_tcp 00:10:01.693 rmmod nvme_fabrics 00:10:01.693 rmmod nvme_keyring 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:01.693 
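With the dirty-grow test finished, teardown follows the usual nvmftestfini order: the trace shared-memory file was archived above (nvmf_trace.0_shm.tar.gz), the initiator kernel modules were unloaded, and the target process, iptables rules, and namespaced interfaces are cleaned up next. Roughly, with $nvmfpid and $output_dir as used elsewhere in this log, and the netns command shown as an assumed expansion of _remove_spdk_ns:

# Preserve the trace buffer for offline analysis (done above via process_shm).
tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

# The rmmod lines above are the verbose output of a single modprobe call.
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring

# Stop the target, drop the SPDK iptables rules, and flush the test interfaces.
kill "$nvmfpid" && wait "$nvmfpid"
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk          # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1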
06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 974999 ']' 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 974999 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 974999 ']' 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 974999 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 974999 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 974999' 00:10:01.693 killing process with pid 974999 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 974999 00:10:01.693 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 974999 00:10:01.953 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:01.953 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:01.953 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:01.953 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:01.953 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:01.953 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:01.953 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:01.953 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.953 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:01.953 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.953 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.953 06:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.857 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:03.857 00:10:03.857 real 0m42.594s 00:10:03.857 user 1m12.120s 00:10:03.857 sys 0m8.725s 00:10:03.857 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.857 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:03.857 ************************************ 00:10:03.857 END TEST nvmf_lvs_grow 00:10:03.857 ************************************ 00:10:03.857 06:13:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:03.857 06:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:03.857 06:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.857 06:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.115 ************************************ 00:10:04.115 START TEST nvmf_bdev_io_wait 00:10:04.115 ************************************ 00:10:04.115 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:04.115 * Looking for test storage... 00:10:04.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:04.115 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:04.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.116 --rc genhtml_branch_coverage=1 00:10:04.116 --rc genhtml_function_coverage=1 00:10:04.116 --rc genhtml_legend=1 00:10:04.116 --rc geninfo_all_blocks=1 00:10:04.116 --rc geninfo_unexecuted_blocks=1 00:10:04.116 00:10:04.116 ' 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:04.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.116 --rc genhtml_branch_coverage=1 00:10:04.116 --rc genhtml_function_coverage=1 00:10:04.116 --rc genhtml_legend=1 00:10:04.116 --rc geninfo_all_blocks=1 00:10:04.116 --rc geninfo_unexecuted_blocks=1 00:10:04.116 00:10:04.116 ' 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:04.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.116 --rc genhtml_branch_coverage=1 00:10:04.116 --rc genhtml_function_coverage=1 00:10:04.116 --rc genhtml_legend=1 00:10:04.116 --rc geninfo_all_blocks=1 00:10:04.116 --rc geninfo_unexecuted_blocks=1 00:10:04.116 00:10:04.116 ' 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:04.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.116 --rc genhtml_branch_coverage=1 00:10:04.116 --rc genhtml_function_coverage=1 00:10:04.116 --rc genhtml_legend=1 00:10:04.116 --rc geninfo_all_blocks=1 00:10:04.116 --rc geninfo_unexecuted_blocks=1 00:10:04.116 00:10:04.116 ' 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.116 06:13:54 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.116 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:06.651 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:06.651 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.651 06:13:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:06.651 Found net devices under 0000:84:00.0: cvl_0_0 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:06.651 Found net devices under 0000:84:00.1: cvl_0_1 00:10:06.651 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:06.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:10:06.652 00:10:06.652 --- 10.0.0.2 ping statistics --- 00:10:06.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.652 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:10:06.652 00:10:06.652 --- 10.0.0.1 ping statistics --- 00:10:06.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.652 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=977556 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 977556 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 977556 ']' 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.652 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.652 [2024-12-08 06:13:56.581691] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
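The xtrace above is nvmfappstart: nvmf_tgt is launched inside the freshly created cvl_0_0_ns_spdk namespace with --wait-for-rpc (so subsystem init is deferred until the framework_start_init RPC issued later in this test), and waitforlisten blocks until the RPC socket answers. A minimal standalone sketch of that launch, assuming SPDK is built under $SPDK_DIR; the polling loop is a stand-in for waitforlisten, not the literal common.sh code:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Launch the target inside the namespace; --wait-for-rpc keeps it idle
# until framework_start_init is issued over the RPC socket.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
  -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# Simplified waitforlisten: poll /var/tmp/spdk.sock until it responds.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
  >/dev/null 2>&1; do
  sleep 0.5
done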
00:10:06.652 [2024-12-08 06:13:56.581795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.652 [2024-12-08 06:13:56.656605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.652 [2024-12-08 06:13:56.717587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.652 [2024-12-08 06:13:56.717658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.652 [2024-12-08 06:13:56.717671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.652 [2024-12-08 06:13:56.717682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.652 [2024-12-08 06:13:56.717691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.652 [2024-12-08 06:13:56.719479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.652 [2024-12-08 06:13:56.719540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.652 [2024-12-08 06:13:56.719609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.652 [2024-12-08 06:13:56.719612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:10:06.912 [2024-12-08 06:13:56.920491] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.912 Malloc0 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.912 [2024-12-08 06:13:56.972046] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=977703 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=977705 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=977707 00:10:06.912 06:13:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:06.912 { 00:10:06.912 "params": { 00:10:06.912 "name": "Nvme$subsystem", 00:10:06.912 "trtype": "$TEST_TRANSPORT", 00:10:06.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.912 "adrfam": "ipv4", 00:10:06.912 "trsvcid": "$NVMF_PORT", 00:10:06.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.912 "hdgst": ${hdgst:-false}, 00:10:06.912 "ddgst": ${ddgst:-false} 00:10:06.912 }, 00:10:06.912 "method": "bdev_nvme_attach_controller" 00:10:06.912 } 00:10:06.912 EOF 00:10:06.912 )") 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=977709 00:10:06.912 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:06.912 { 00:10:06.912 "params": { 00:10:06.912 "name": "Nvme$subsystem", 00:10:06.912 "trtype": "$TEST_TRANSPORT", 00:10:06.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.912 "adrfam": "ipv4", 00:10:06.912 "trsvcid": "$NVMF_PORT", 00:10:06.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.912 "hdgst": ${hdgst:-false}, 00:10:06.912 "ddgst": ${ddgst:-false} 00:10:06.912 }, 00:10:06.913 "method": "bdev_nvme_attach_controller" 00:10:06.913 } 00:10:06.913 EOF 00:10:06.913 )") 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:06.913 { 00:10:06.913 
"params": { 00:10:06.913 "name": "Nvme$subsystem", 00:10:06.913 "trtype": "$TEST_TRANSPORT", 00:10:06.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.913 "adrfam": "ipv4", 00:10:06.913 "trsvcid": "$NVMF_PORT", 00:10:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.913 "hdgst": ${hdgst:-false}, 00:10:06.913 "ddgst": ${ddgst:-false} 00:10:06.913 }, 00:10:06.913 "method": "bdev_nvme_attach_controller" 00:10:06.913 } 00:10:06.913 EOF 00:10:06.913 )") 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:06.913 { 00:10:06.913 "params": { 00:10:06.913 "name": "Nvme$subsystem", 00:10:06.913 "trtype": "$TEST_TRANSPORT", 00:10:06.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.913 "adrfam": "ipv4", 00:10:06.913 "trsvcid": "$NVMF_PORT", 00:10:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.913 "hdgst": ${hdgst:-false}, 00:10:06.913 "ddgst": ${ddgst:-false} 00:10:06.913 }, 00:10:06.913 "method": "bdev_nvme_attach_controller" 00:10:06.913 } 00:10:06.913 EOF 00:10:06.913 )") 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 977703 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:06.913 "params": { 00:10:06.913 "name": "Nvme1", 00:10:06.913 "trtype": "tcp", 00:10:06.913 "traddr": "10.0.0.2", 00:10:06.913 "adrfam": "ipv4", 00:10:06.913 "trsvcid": "4420", 00:10:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.913 "hdgst": false, 00:10:06.913 "ddgst": false 00:10:06.913 }, 00:10:06.913 "method": "bdev_nvme_attach_controller" 00:10:06.913 }' 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:06.913 "params": { 00:10:06.913 "name": "Nvme1", 00:10:06.913 "trtype": "tcp", 00:10:06.913 "traddr": "10.0.0.2", 00:10:06.913 "adrfam": "ipv4", 00:10:06.913 "trsvcid": "4420", 00:10:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.913 "hdgst": false, 00:10:06.913 "ddgst": false 00:10:06.913 }, 00:10:06.913 "method": "bdev_nvme_attach_controller" 00:10:06.913 }' 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:06.913 "params": { 00:10:06.913 "name": "Nvme1", 00:10:06.913 "trtype": "tcp", 00:10:06.913 "traddr": "10.0.0.2", 00:10:06.913 "adrfam": "ipv4", 00:10:06.913 "trsvcid": "4420", 00:10:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.913 "hdgst": false, 00:10:06.913 "ddgst": false 00:10:06.913 }, 00:10:06.913 "method": "bdev_nvme_attach_controller" 00:10:06.913 }' 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:06.913 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:06.913 "params": { 00:10:06.913 "name": "Nvme1", 00:10:06.913 "trtype": "tcp", 00:10:06.913 "traddr": "10.0.0.2", 00:10:06.913 "adrfam": "ipv4", 00:10:06.913 "trsvcid": "4420", 00:10:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.913 "hdgst": false, 00:10:06.913 "ddgst": false 00:10:06.913 }, 00:10:06.913 "method": "bdev_nvme_attach_controller" 00:10:06.913 }' 00:10:06.913 [2024-12-08 06:13:57.022146] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:10:06.913 [2024-12-08 06:13:57.022146] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:10:06.913 [2024-12-08 06:13:57.022146] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:10:06.913 [2024-12-08 06:13:57.022244] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:10:06.913 [2024-12-08 06:13:57.022244] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:10:06.913 [2024-12-08 06:13:57.022244] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:10:06.913 [2024-12-08 06:13:57.023579] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:10:06.913 [2024-12-08 06:13:57.023649] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:07.171 [2024-12-08 06:13:57.205261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.171 [2024-12-08 06:13:57.260284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:07.428 [2024-12-08 06:13:57.309678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.428 [2024-12-08 06:13:57.364813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:07.428 [2024-12-08 06:13:57.406805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.429 [2024-12-08 06:13:57.463671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:07.429 [2024-12-08 06:13:57.483206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.429 [2024-12-08 06:13:57.533804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:07.685 Running I/O for 1 seconds... 00:10:07.685 Running I/O for 1 seconds... 00:10:07.685 Running I/O for 1 seconds... 00:10:07.685 Running I/O for 1 seconds... 
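At this point bdev_io_wait.sh has provisioned the target over RPC (steps @18 through @25 above) and launched four bdevperf instances — write, read, flush and unmap — each handed a bdev_nvme_attach_controller config on /dev/fd/63 via process substitution, which is what gen_nvmf_target_json printed four copies of above. A condensed sketch of the provisioning plus one of the four launches, reusing the paths and addresses from the trace; note the outer subsystems/bdev wrapper is the usual SPDK app JSON-config shape, implied rather than shown in the xtrace:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK_DIR/scripts/rpc.py"
# Target provisioning, as in bdev_io_wait.sh @18-@25:
$rpc bdev_set_options -p 5 -c 1
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The write-workload bdevperf (core mask 0x10); the read, flush and unmap
# jobs differ only in -m, -i and -w.
"$SPDK_DIR/build/examples/bdevperf" -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
  --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)

The Nvme1n1 rows below come from these four one-second jobs; the flush job's outsized IOPS is expected, since flushing a RAM-backed malloc bdev completes without touching any media.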
00:10:08.617 10373.00 IOPS, 40.52 MiB/s 00:10:08.617 Latency(us) 00:10:08.617 [2024-12-08T05:13:58.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.617 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:08.617 Nvme1n1 : 1.01 10428.81 40.74 0.00 0.00 12224.47 6116.69 18738.44 00:10:08.617 [2024-12-08T05:13:58.736Z] =================================================================================================================== 00:10:08.617 [2024-12-08T05:13:58.736Z] Total : 10428.81 40.74 0.00 0.00 12224.47 6116.69 18738.44 00:10:08.617 8162.00 IOPS, 31.88 MiB/s 00:10:08.617 Latency(us) 00:10:08.617 [2024-12-08T05:13:58.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.617 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:08.617 Nvme1n1 : 1.01 8206.98 32.06 0.00 0.00 15511.00 8932.31 23204.60 00:10:08.617 [2024-12-08T05:13:58.736Z] =================================================================================================================== 00:10:08.617 [2024-12-08T05:13:58.736Z] Total : 8206.98 32.06 0.00 0.00 15511.00 8932.31 23204.60 00:10:08.617 9587.00 IOPS, 37.45 MiB/s 00:10:08.617 Latency(us) 00:10:08.617 [2024-12-08T05:13:58.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.617 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:08.617 Nvme1n1 : 1.01 9663.48 37.75 0.00 0.00 13200.00 4830.25 24466.77 00:10:08.617 [2024-12-08T05:13:58.736Z] =================================================================================================================== 00:10:08.617 [2024-12-08T05:13:58.736Z] Total : 9663.48 37.75 0.00 0.00 13200.00 4830.25 24466.77 00:10:08.876 189800.00 IOPS, 741.41 MiB/s 00:10:08.876 Latency(us) 00:10:08.876 [2024-12-08T05:13:58.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.876 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:08.876 Nvme1n1 : 1.00 189442.75 740.01 0.00 0.00 672.02 300.37 1881.13 00:10:08.876 [2024-12-08T05:13:58.995Z] =================================================================================================================== 00:10:08.876 [2024-12-08T05:13:58.995Z] Total : 189442.75 740.01 0.00 0.00 672.02 300.37 1881.13 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 977705 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 977707 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 977709 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.876 rmmod nvme_tcp 00:10:08.876 rmmod nvme_fabrics 00:10:08.876 rmmod nvme_keyring 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 977556 ']' 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 977556 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 977556 ']' 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 977556 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.876 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 977556 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 977556' 00:10:09.134 killing process with pid 977556 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 977556 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 977556 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.134 06:13:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.134 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.672 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.673 00:10:11.673 real 0m7.284s 00:10:11.673 user 0m15.802s 00:10:11.673 sys 0m3.659s 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.673 ************************************ 00:10:11.673 END TEST nvmf_bdev_io_wait 00:10:11.673 ************************************ 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.673 ************************************ 00:10:11.673 START TEST nvmf_queue_depth 00:10:11.673 ************************************ 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:11.673 * Looking for test storage... 
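Before nvmf_queue_depth repeats the same bring-up, note the teardown pattern nvmf_bdev_io_wait just finished with (nvmftestfini, above): unload the host-side NVMe modules, kill the target, strip only the iptables rules this run tagged, then drop the namespace. Roughly, with the names created earlier and ip netns delete standing in for _remove_spdk_ns (which removes the *_ns_spdk namespaces):

modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess 977556 in the trace
# iptr: restore every rule except those carrying the SPDK_NVMF comment
# tag that ipts added during setup.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk      # stand-in for _remove_spdk_ns
ip -4 addr flush cvl_0_1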
00:10:11.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.673 --rc genhtml_branch_coverage=1 00:10:11.673 --rc genhtml_function_coverage=1 00:10:11.673 --rc genhtml_legend=1 00:10:11.673 --rc geninfo_all_blocks=1 00:10:11.673 --rc geninfo_unexecuted_blocks=1 00:10:11.673 00:10:11.673 ' 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.673 --rc genhtml_branch_coverage=1 00:10:11.673 --rc genhtml_function_coverage=1 00:10:11.673 --rc genhtml_legend=1 00:10:11.673 --rc geninfo_all_blocks=1 00:10:11.673 --rc geninfo_unexecuted_blocks=1 00:10:11.673 00:10:11.673 ' 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.673 --rc genhtml_branch_coverage=1 00:10:11.673 --rc genhtml_function_coverage=1 00:10:11.673 --rc genhtml_legend=1 00:10:11.673 --rc geninfo_all_blocks=1 00:10:11.673 --rc geninfo_unexecuted_blocks=1 00:10:11.673 00:10:11.673 ' 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.673 --rc genhtml_branch_coverage=1 00:10:11.673 --rc genhtml_function_coverage=1 00:10:11.673 --rc genhtml_legend=1 00:10:11.673 --rc geninfo_all_blocks=1 00:10:11.673 --rc geninfo_unexecuted_blocks=1 00:10:11.673 00:10:11.673 ' 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.673 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.674 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:13.581 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:13.582 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:13.582 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:13.582 Found net devices under 0000:84:00.0: cvl_0_0 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:13.582 Found net devices under 0000:84:00.1: cvl_0_1 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.582 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:13.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:10:13.841 00:10:13.841 --- 10.0.0.2 ping statistics --- 00:10:13.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.841 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:13.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:10:13.841 00:10:13.841 --- 10.0.0.1 ping statistics --- 00:10:13.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.841 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=979956 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 979956 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 979956 ']' 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.841 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.841 [2024-12-08 06:14:03.892503] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
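Annotation: the nvmf_tcp_init sequence traced above reduces to the commands below. This is a condensed sketch for reference rather than a verbatim replay; the interface names (cvl_0_0, cvl_0_1), the namespace name, and the 10.0.0.0/24 addresses are taken from this run's log.

  # Move the target-side NIC into its own network namespace so initiator
  # and target can exchange real NVMe/TCP traffic on a single host.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # The initiator-side NIC stays in the root namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listen port (4420) on the initiator-facing side.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check reachability in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1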
00:10:13.841 [2024-12-08 06:14:03.892603] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.099 [2024-12-08 06:14:03.966718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.099 [2024-12-08 06:14:04.019004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.099 [2024-12-08 06:14:04.019086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.099 [2024-12-08 06:14:04.019110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.099 [2024-12-08 06:14:04.019121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.099 [2024-12-08 06:14:04.019130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.099 [2024-12-08 06:14:04.019828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.099 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.099 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:14.099 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:14.099 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:14.099 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.099 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.099 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.099 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.099 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.099 [2024-12-08 06:14:04.158938] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.099 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.099 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:14.099 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.099 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.099 Malloc0 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.100 06:14:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.100 [2024-12-08 06:14:04.207610] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=979977 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 979977 /var/tmp/bdevperf.sock 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 979977 ']' 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:14.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.100 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.359 [2024-12-08 06:14:04.253904] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
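Annotation: the queue_depth bring-up around this point, condensed. The rpc_cmd calls in the trace are the harness's wrapper around scripts/rpc.py; every call below appears in this run, with only the long /var/jenkins workspace paths shortened to spdk-relative form for readability.

  # Start the target inside the namespace, then configure it over the
  # default RPC socket (/var/tmp/spdk.sock).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Drive it from the initiator side: bdevperf in daemon mode (-z) at queue
  # depth 1024, 4 KiB verify workload for 10 seconds, controlled over its
  # own RPC socket.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests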
00:10:14.359 [2024-12-08 06:14:04.253971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979977 ] 00:10:14.359 [2024-12-08 06:14:04.319428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.359 [2024-12-08 06:14:04.376096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.618 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.618 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:14.618 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:14.618 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.618 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.618 NVMe0n1 00:10:14.618 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.618 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:14.618 Running I/O for 10 seconds... 00:10:16.934 9136.00 IOPS, 35.69 MiB/s [2024-12-08T05:14:07.992Z] 9271.50 IOPS, 36.22 MiB/s [2024-12-08T05:14:08.927Z] 9540.67 IOPS, 37.27 MiB/s [2024-12-08T05:14:09.863Z] 9524.75 IOPS, 37.21 MiB/s [2024-12-08T05:14:10.799Z] 9616.40 IOPS, 37.56 MiB/s [2024-12-08T05:14:11.738Z] 9579.83 IOPS, 37.42 MiB/s [2024-12-08T05:14:13.118Z] 9634.00 IOPS, 37.63 MiB/s [2024-12-08T05:14:13.688Z] 9658.88 IOPS, 37.73 MiB/s [2024-12-08T05:14:15.067Z] 9678.33 IOPS, 37.81 MiB/s [2024-12-08T05:14:15.067Z] 9715.20 IOPS, 37.95 MiB/s 00:10:24.948 Latency(us) 00:10:24.948 [2024-12-08T05:14:15.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.948 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:24.948 Verification LBA range: start 0x0 length 0x4000 00:10:24.948 NVMe0n1 : 10.07 9745.58 38.07 0.00 0.00 104694.71 20777.34 69128.34 00:10:24.948 [2024-12-08T05:14:15.067Z] =================================================================================================================== 00:10:24.948 [2024-12-08T05:14:15.067Z] Total : 9745.58 38.07 0.00 0.00 104694.71 20777.34 69128.34 00:10:24.948 { 00:10:24.948 "results": [ 00:10:24.948 { 00:10:24.948 "job": "NVMe0n1", 00:10:24.948 "core_mask": "0x1", 00:10:24.948 "workload": "verify", 00:10:24.948 "status": "finished", 00:10:24.948 "verify_range": { 00:10:24.948 "start": 0, 00:10:24.948 "length": 16384 00:10:24.948 }, 00:10:24.948 "queue_depth": 1024, 00:10:24.948 "io_size": 4096, 00:10:24.948 "runtime": 10.073905, 00:10:24.948 "iops": 9745.575325556474, 00:10:24.948 "mibps": 38.06865361545498, 00:10:24.948 "io_failed": 0, 00:10:24.948 "io_timeout": 0, 00:10:24.948 "avg_latency_us": 104694.71482350669, 00:10:24.948 "min_latency_us": 20777.33925925926, 00:10:24.948 "max_latency_us": 69128.34370370371 00:10:24.948 } 00:10:24.948 ], 00:10:24.948 "core_count": 1 00:10:24.948 } 00:10:24.948 06:14:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 979977 00:10:24.948 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 979977 ']' 00:10:24.948 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 979977 00:10:24.948 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:24.948 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.948 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 979977 00:10:24.948 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.948 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.948 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 979977' 00:10:24.948 killing process with pid 979977 00:10:24.948 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 979977 00:10:24.948 Received shutdown signal, test time was about 10.000000 seconds 00:10:24.948 00:10:24.948 Latency(us) 00:10:24.948 [2024-12-08T05:14:15.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.948 [2024-12-08T05:14:15.067Z] =================================================================================================================== 00:10:24.948 [2024-12-08T05:14:15.067Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:24.948 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 979977 00:10:24.948 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:24.948 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:24.948 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:24.948 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:24.948 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:24.948 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:24.948 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:24.948 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:24.948 rmmod nvme_tcp 00:10:24.948 rmmod nvme_fabrics 00:10:25.207 rmmod nvme_keyring 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 979956 ']' 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 979956 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 979956 ']' 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 979956 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 979956 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 979956' 00:10:25.207 killing process with pid 979956 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 979956 00:10:25.207 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 979956 00:10:25.467 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:25.467 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:25.467 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:25.467 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:25.467 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:25.467 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:25.467 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:25.467 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:25.467 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:25.467 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.467 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.467 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.371 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:27.371 00:10:27.371 real 0m16.118s 00:10:27.371 user 0m22.148s 00:10:27.371 sys 0m3.444s 00:10:27.371 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.371 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:27.371 ************************************ 00:10:27.371 END TEST nvmf_queue_depth 00:10:27.371 ************************************ 00:10:27.371 06:14:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:27.371 06:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.371 06:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.371 06:14:17 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.371 ************************************ 00:10:27.371 START TEST nvmf_target_multipath 00:10:27.371 ************************************ 00:10:27.371 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:27.631 * Looking for test storage... 00:10:27.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:27.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.631 --rc genhtml_branch_coverage=1 00:10:27.631 --rc genhtml_function_coverage=1 00:10:27.631 --rc genhtml_legend=1 00:10:27.631 --rc geninfo_all_blocks=1 00:10:27.631 --rc geninfo_unexecuted_blocks=1 00:10:27.631 00:10:27.631 ' 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:27.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.631 --rc genhtml_branch_coverage=1 00:10:27.631 --rc genhtml_function_coverage=1 00:10:27.631 --rc genhtml_legend=1 00:10:27.631 --rc geninfo_all_blocks=1 00:10:27.631 --rc geninfo_unexecuted_blocks=1 00:10:27.631 00:10:27.631 ' 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:27.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.631 --rc genhtml_branch_coverage=1 00:10:27.631 --rc genhtml_function_coverage=1 00:10:27.631 --rc genhtml_legend=1 00:10:27.631 --rc geninfo_all_blocks=1 00:10:27.631 --rc geninfo_unexecuted_blocks=1 00:10:27.631 00:10:27.631 ' 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:27.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.631 --rc genhtml_branch_coverage=1 00:10:27.631 --rc genhtml_function_coverage=1 00:10:27.631 --rc genhtml_legend=1 00:10:27.631 --rc geninfo_all_blocks=1 00:10:27.631 --rc geninfo_unexecuted_blocks=1 00:10:27.631 00:10:27.631 ' 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.631 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:27.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:27.632 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:30.165 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:30.166 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:30.166 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:30.166 Found net devices under 0000:84:00.0: cvl_0_0 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.166 06:14:19 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:30.166 Found net devices under 0000:84:00.1: cvl_0_1 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:30.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:10:30.166 00:10:30.166 --- 10.0.0.2 ping statistics --- 00:10:30.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.166 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:10:30.166 00:10:30.166 --- 10.0.0.1 ping statistics --- 00:10:30.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.166 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:30.166 only one NIC for nvmf test 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.166 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:30.166 rmmod nvme_tcp 00:10:30.166 rmmod nvme_fabrics 00:10:30.166 rmmod nvme_keyring 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.166 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:32.140 00:10:32.140 real 0m4.626s 00:10:32.140 user 0m0.936s 00:10:32.140 sys 0m1.704s 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:32.140 ************************************ 00:10:32.140 END TEST nvmf_target_multipath 00:10:32.140 ************************************ 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.140 06:14:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:32.141 ************************************ 00:10:32.141 START TEST nvmf_zcopy 00:10:32.141 ************************************ 00:10:32.141 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:32.141 * Looking for test storage... 
00:10:32.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.141 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:32.141 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:32.141 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:32.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.399 --rc genhtml_branch_coverage=1 00:10:32.399 --rc genhtml_function_coverage=1 00:10:32.399 --rc genhtml_legend=1 00:10:32.399 --rc geninfo_all_blocks=1 00:10:32.399 --rc geninfo_unexecuted_blocks=1 00:10:32.399 00:10:32.399 ' 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:32.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.399 --rc genhtml_branch_coverage=1 00:10:32.399 --rc genhtml_function_coverage=1 00:10:32.399 --rc genhtml_legend=1 00:10:32.399 --rc geninfo_all_blocks=1 00:10:32.399 --rc geninfo_unexecuted_blocks=1 00:10:32.399 00:10:32.399 ' 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:32.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.399 --rc genhtml_branch_coverage=1 00:10:32.399 --rc genhtml_function_coverage=1 00:10:32.399 --rc genhtml_legend=1 00:10:32.399 --rc geninfo_all_blocks=1 00:10:32.399 --rc geninfo_unexecuted_blocks=1 00:10:32.399 00:10:32.399 ' 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:32.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.399 --rc genhtml_branch_coverage=1 00:10:32.399 --rc genhtml_function_coverage=1 00:10:32.399 --rc genhtml_legend=1 00:10:32.399 --rc geninfo_all_blocks=1 00:10:32.399 --rc geninfo_unexecuted_blocks=1 00:10:32.399 00:10:32.399 ' 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:32.399 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:32.400 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.400 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.400 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.400 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.400 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.400 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.400 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:32.400 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:32.400 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:32.400 06:14:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.931 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.931 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:34.931 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:34.931 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:34.931 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:34.931 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:34.931 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:34.931 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:34.931 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:34.931 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:34.931 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:34.931 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:34.931 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:34.932 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:34.932 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:34.932 Found net devices under 0000:84:00.0: cvl_0_0 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:34.932 Found net devices under 0000:84:00.1: cvl_0_1 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:34.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:34.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:10:34.932 00:10:34.932 --- 10.0.0.2 ping statistics --- 00:10:34.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.932 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:34.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:34.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:10:34.932 00:10:34.932 --- 10.0.0.1 ping statistics --- 00:10:34.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.932 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=985224 00:10:34.932 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 985224 00:10:34.933 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 985224 ']' 00:10:34.933 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:34.933 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.933 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.933 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.933 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.933 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.933 [2024-12-08 06:14:24.774994] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
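The nvmf_tcp_init steps traced just above set up the two-port test bed that every TCP case in this log reuses: the target port (cvl_0_0) is moved into its own network namespace, the initiator port (cvl_0_1) stays in the root namespace, and the two pings verify the path in both directions. Condensed into plain commands, using the names and addresses from this run:

NS=cvl_0_0_ns_spdk

ip netns add $NS
ip link set cvl_0_0 netns $NS                   # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up

# Open NVMe/TCP port 4420, tagged so nvmftestfini can strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                              # root ns -> target
ip netns exec $NS ping -c 1 10.0.0.1            # target ns -> initiator

This also explains the nvmfpid line that follows: nvmf_tgt itself is launched through "ip netns exec cvl_0_0_ns_spdk", so the listener it later creates on 10.0.0.2:4420 is reached from the root namespace over the link between the two E810 ports.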
00:10:34.933 [2024-12-08 06:14:24.775119] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.933 [2024-12-08 06:14:24.848243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.933 [2024-12-08 06:14:24.905604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.933 [2024-12-08 06:14:24.905678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.933 [2024-12-08 06:14:24.905707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.933 [2024-12-08 06:14:24.905718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.933 [2024-12-08 06:14:24.905737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:34.933 [2024-12-08 06:14:24.906517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.933 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.933 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:34.933 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:34.933 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:34.933 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.191 [2024-12-08 06:14:25.058501] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.191 [2024-12-08 06:14:25.074747] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.191 malloc0 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:35.191 { 00:10:35.191 "params": { 00:10:35.191 "name": "Nvme$subsystem", 00:10:35.191 "trtype": "$TEST_TRANSPORT", 00:10:35.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:35.191 "adrfam": "ipv4", 00:10:35.191 "trsvcid": "$NVMF_PORT", 00:10:35.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:35.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:35.191 "hdgst": ${hdgst:-false}, 00:10:35.191 "ddgst": ${ddgst:-false} 00:10:35.191 }, 00:10:35.191 "method": "bdev_nvme_attach_controller" 00:10:35.191 } 00:10:35.191 EOF 00:10:35.191 )") 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:35.191 06:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:35.191 "params": { 00:10:35.191 "name": "Nvme1", 00:10:35.191 "trtype": "tcp", 00:10:35.192 "traddr": "10.0.0.2", 00:10:35.192 "adrfam": "ipv4", 00:10:35.192 "trsvcid": "4420", 00:10:35.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:35.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:35.192 "hdgst": false, 00:10:35.192 "ddgst": false 00:10:35.192 }, 00:10:35.192 "method": "bdev_nvme_attach_controller" 00:10:35.192 }' 00:10:35.192 [2024-12-08 06:14:25.157355] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:10:35.192 [2024-12-08 06:14:25.157429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985250 ] 00:10:35.192 [2024-12-08 06:14:25.223981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.192 [2024-12-08 06:14:25.283994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.760 Running I/O for 10 seconds... 00:10:37.633 6435.00 IOPS, 50.27 MiB/s [2024-12-08T05:14:28.692Z] 6437.50 IOPS, 50.29 MiB/s [2024-12-08T05:14:29.633Z] 6445.67 IOPS, 50.36 MiB/s [2024-12-08T05:14:31.013Z] 6453.00 IOPS, 50.41 MiB/s [2024-12-08T05:14:31.955Z] 6438.00 IOPS, 50.30 MiB/s [2024-12-08T05:14:32.899Z] 6458.67 IOPS, 50.46 MiB/s [2024-12-08T05:14:33.837Z] 6456.71 IOPS, 50.44 MiB/s [2024-12-08T05:14:34.775Z] 6453.62 IOPS, 50.42 MiB/s [2024-12-08T05:14:35.716Z] 6459.22 IOPS, 50.46 MiB/s [2024-12-08T05:14:35.716Z] 6461.20 IOPS, 50.48 MiB/s 00:10:45.597 Latency(us) 00:10:45.597 [2024-12-08T05:14:35.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.597 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:45.597 Verification LBA range: start 0x0 length 0x1000 00:10:45.597 Nvme1n1 : 10.01 6462.56 50.49 0.00 0.00 19755.98 2633.58 27185.30 00:10:45.597 [2024-12-08T05:14:35.716Z] =================================================================================================================== 00:10:45.597 [2024-12-08T05:14:35.716Z] Total : 6462.56 50.49 0.00 0.00 19755.98 2633.58 27185.30 00:10:45.858 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=986569 00:10:45.858 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:45.858 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.858 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:45.858 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:45.858 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:45.858 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:45.858 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:45.858 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:45.858 { 00:10:45.858 "params": { 00:10:45.858 "name": 
"Nvme$subsystem", 00:10:45.858 "trtype": "$TEST_TRANSPORT", 00:10:45.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:45.858 "adrfam": "ipv4", 00:10:45.858 "trsvcid": "$NVMF_PORT", 00:10:45.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:45.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:45.858 "hdgst": ${hdgst:-false}, 00:10:45.858 "ddgst": ${ddgst:-false} 00:10:45.858 }, 00:10:45.858 "method": "bdev_nvme_attach_controller" 00:10:45.858 } 00:10:45.858 EOF 00:10:45.858 )") 00:10:45.858 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:45.858 [2024-12-08 06:14:35.853228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.853268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:45.858 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:45.858 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:45.858 "params": { 00:10:45.858 "name": "Nvme1", 00:10:45.858 "trtype": "tcp", 00:10:45.858 "traddr": "10.0.0.2", 00:10:45.858 "adrfam": "ipv4", 00:10:45.858 "trsvcid": "4420", 00:10:45.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:45.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:45.858 "hdgst": false, 00:10:45.858 "ddgst": false 00:10:45.858 }, 00:10:45.858 "method": "bdev_nvme_attach_controller" 00:10:45.858 }' 00:10:45.858 [2024-12-08 06:14:35.861184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.861205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.869205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.869225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.877226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.877246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.885249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.885269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.890794] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:10:45.858 [2024-12-08 06:14:35.890871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986569 ] 00:10:45.858 [2024-12-08 06:14:35.893268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.893287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.901290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.901310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.909313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.909342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.917334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.917353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.925355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.925375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.933378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.933399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.941397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.941417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.949418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.949436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.957440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.957459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.960117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.858 [2024-12-08 06:14:35.965469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.965489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.858 [2024-12-08 06:14:35.973523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.858 [2024-12-08 06:14:35.973560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.117 [2024-12-08 06:14:35.981515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.117 [2024-12-08 06:14:35.981541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.117 [2024-12-08 06:14:35.989529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.117 [2024-12-08 06:14:35.989549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:46.117 [2024-12-08 06:14:35.997549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.117 [2024-12-08 06:14:35.997569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.117 [2024-12-08 06:14:36.005571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.117 [2024-12-08 06:14:36.005591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.117 [2024-12-08 06:14:36.013595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.117 [2024-12-08 06:14:36.013614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.117 [2024-12-08 06:14:36.021466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.117 [2024-12-08 06:14:36.021620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.117 [2024-12-08 06:14:36.021640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.117 [2024-12-08 06:14:36.029639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.117 [2024-12-08 06:14:36.029658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.117 [2024-12-08 06:14:36.037689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.117 [2024-12-08 06:14:36.037742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.117 [2024-12-08 06:14:36.045740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.117 [2024-12-08 06:14:36.045776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.117 [2024-12-08 06:14:36.053766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.053817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.061784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.061824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.069807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.069845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.077832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.077873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.085843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.085883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.093835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.093857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.101885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.101921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 
06:14:36.109910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.109948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.117939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.117984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.125922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.125944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.133943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.133965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.141973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.141997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.149998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.150036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.158030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.158052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.166078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.166101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.174072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.174094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.182090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.182112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.190111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.190131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.198147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.198167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.206163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.206190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.214184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.214203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.222210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.222231] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.118 [2024-12-08 06:14:36.230232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.118 [2024-12-08 06:14:36.230255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 [2024-12-08 06:14:36.238259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.378 [2024-12-08 06:14:36.238281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 [2024-12-08 06:14:36.246274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.378 [2024-12-08 06:14:36.246295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 [2024-12-08 06:14:36.254298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.378 [2024-12-08 06:14:36.254319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 [2024-12-08 06:14:36.262321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.378 [2024-12-08 06:14:36.262340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 [2024-12-08 06:14:36.270349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.378 [2024-12-08 06:14:36.270371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 [2024-12-08 06:14:36.278366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.378 [2024-12-08 06:14:36.278386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 [2024-12-08 06:14:36.286387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.378 [2024-12-08 06:14:36.286406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 [2024-12-08 06:14:36.294410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.378 [2024-12-08 06:14:36.294430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 [2024-12-08 06:14:36.302432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.378 [2024-12-08 06:14:36.302451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 [2024-12-08 06:14:36.310457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.378 [2024-12-08 06:14:36.310477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 [2024-12-08 06:14:36.318478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.378 [2024-12-08 06:14:36.318498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 [2024-12-08 06:14:36.326506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.378 [2024-12-08 06:14:36.326530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 [2024-12-08 06:14:36.334524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.378 [2024-12-08 06:14:36.334545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.378 Running I/O for 5 seconds... 
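The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs around this point is expected output, not a failure: while the second bdevperf job drives I/O, the test keeps re-issuing the add-namespace RPC seen earlier (rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1), and the target correctly rejects every attempt because malloc0 already occupies NSID 1. The exact loop lives in zcopy.sh and is not shown in this excerpt; a hedged approximation of its shape:

# Assumption: approximate shape of the concurrent-RPC exercise. Each call is
# expected to fail, so errors are swallowed; the point is to hit the RPC
# pause/resume path (nvmf_rpc_ns_paused in the messages) while I/O is in
# flight. $perfpid is the bdevperf PID recorded by zcopy.sh (986569 here).
while kill -0 "$perfpid" 2>/dev/null; do
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done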
[... the same error pair repeats ~94 more times, 2024-12-08 06:14:36.346343 - 06:14:37.332525 ...]
00:10:47.421 12220.00 IOPS, 95.47 MiB/s [2024-12-08T05:14:37.540Z]
[... the same error pair repeats ~95 more times, 2024-12-08 06:14:37.344588 - 06:14:38.335149 ...]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.463 12256.00 IOPS, 95.75 MiB/s [2024-12-08T05:14:38.582Z] [2024-12-08 06:14:38.345872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.463 [2024-12-08 06:14:38.345899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.463 [2024-12-08 06:14:38.356462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.356504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.369903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.369936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.380000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.380038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.390213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.390237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.400955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.400981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.411831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.411857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.423223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.423248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.433540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.433563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.444143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.444167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.454413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.454437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.464372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.464396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.474664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.474688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.486946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.486971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 
06:14:38.496527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.496551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.506997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.507039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.519549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.519573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.529741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.529767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.540108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.540133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.550715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.550762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.561404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.561437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.464 [2024-12-08 06:14:38.573905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.464 [2024-12-08 06:14:38.573930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.724 [2024-12-08 06:14:38.584296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.724 [2024-12-08 06:14:38.584321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.724 [2024-12-08 06:14:38.594896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.724 [2024-12-08 06:14:38.594922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.724 [2024-12-08 06:14:38.607324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.724 [2024-12-08 06:14:38.607347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.724 [2024-12-08 06:14:38.617051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.724 [2024-12-08 06:14:38.617090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.724 [2024-12-08 06:14:38.627132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.627155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.637491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.637515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.649625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.649649] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.658915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.658940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.669632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.669656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.680381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.680406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.690715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.690752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.700875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.700901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.711311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.711335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.721600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.721624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.732092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.732117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.743952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.743977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.753558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.753582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.763258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.763282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.773932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.773958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.784326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.784350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.794621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.725 [2024-12-08 06:14:38.794645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.725 [2024-12-08 06:14:38.806815] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:48.725 [2024-12-08 06:14:38.806840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.245 [2024-12-08 06:14:39.250286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:49.245 [2024-12-08 06:14:39.250310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:49.245 12272.67 IOPS, 95.88 MiB/s [2024-12-08T05:14:39.364Z]
00:10:50.297 [2024-12-08 06:14:40.321481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.297 [2024-12-08 06:14:40.321505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.297 [2024-12-08 06:14:40.332295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.297 [2024-12-08 06:14:40.332318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:50.297 12246.50 IOPS, 95.68 MiB/s [2024-12-08T05:14:40.416Z]
00:10:51.349 [2024-12-08 06:14:41.252314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.349 [2024-12-08 06:14:41.252339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.349 [2024-12-08 06:14:41.262425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.349 [2024-12-08 06:14:41.262449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.349 [2024-12-08 06:14:41.347850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.349 [2024-12-08 06:14:41.347876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.349 12232.40 IOPS, 95.57 MiB/s [2024-12-08T05:14:41.468Z]
00:10:51.349 [2024-12-08 06:14:41.357624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.349 [2024-12-08 06:14:41.357648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.349 Latency(us)
[2024-12-08T05:14:41.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:51.349 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:51.349 Nvme1n1 : 5.01 12234.43 95.58 0.00 0.00 10448.77 4587.52 17185.00
[2024-12-08T05:14:41.468Z] ===================================================================================================================
00:10:51.349 [2024-12-08T05:14:41.468Z] Total : 12234.43 95.58 0.00 0.00 10448.77 4587.52 17185.00
00:10:51.349 [2024-12-08 06:14:41.362338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.349 [2024-12-08 06:14:41.362361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.349 [2024-12-08 06:14:41.370354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.349 [2024-12-08 06:14:41.370376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.349 [2024-12-08 06:14:41.378372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.349 [2024-12-08 06:14:41.378394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.609 [2024-12-08 06:14:41.578921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:51.609 [2024-12-08 06:14:41.578942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:51.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (986569) - No such process
00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 986569
00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:51.609 delay0 00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.609 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:51.609 [2024-12-08 06:14:41.695568] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:59.738 Initializing NVMe Controllers 00:10:59.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:59.738 Initialization complete. Launching workers. 00:10:59.738 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3172 00:10:59.738 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3459, failed to submit 33 00:10:59.738 success 3284, unsuccessful 175, failed 0 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.738 rmmod nvme_tcp 00:10:59.738 rmmod nvme_fabrics 00:10:59.738 rmmod nvme_keyring 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 985224 ']' 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 985224 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 985224 ']' 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # 
kill -0 985224 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 985224 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 985224' 00:10:59.738 killing process with pid 985224 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 985224 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 985224 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.738 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.679 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:00.679 00:11:00.679 real 0m28.571s 00:11:00.679 user 0m40.954s 00:11:00.679 sys 0m9.686s 00:11:00.679 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.679 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:00.679 ************************************ 00:11:00.679 END TEST nvmf_zcopy 00:11:00.679 ************************************ 00:11:00.679 06:14:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:00.679 06:14:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.679 06:14:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.679 06:14:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.679 ************************************ 00:11:00.679 START TEST nvmf_nmic 00:11:00.679 ************************************ 00:11:00.679 06:14:50 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:00.938 * Looking for test storage... 00:11:00.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.938 --rc genhtml_branch_coverage=1 00:11:00.938 --rc genhtml_function_coverage=1 00:11:00.938 --rc genhtml_legend=1 00:11:00.938 --rc geninfo_all_blocks=1 00:11:00.938 --rc geninfo_unexecuted_blocks=1 00:11:00.938 00:11:00.938 ' 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.938 --rc genhtml_branch_coverage=1 00:11:00.938 --rc genhtml_function_coverage=1 00:11:00.938 --rc genhtml_legend=1 00:11:00.938 --rc geninfo_all_blocks=1 00:11:00.938 --rc geninfo_unexecuted_blocks=1 00:11:00.938 00:11:00.938 ' 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.938 --rc genhtml_branch_coverage=1 00:11:00.938 --rc genhtml_function_coverage=1 00:11:00.938 --rc genhtml_legend=1 00:11:00.938 --rc geninfo_all_blocks=1 00:11:00.938 --rc geninfo_unexecuted_blocks=1 00:11:00.938 00:11:00.938 ' 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.938 --rc genhtml_branch_coverage=1 00:11:00.938 --rc genhtml_function_coverage=1 00:11:00.938 --rc genhtml_legend=1 00:11:00.938 --rc geninfo_all_blocks=1 00:11:00.938 --rc geninfo_unexecuted_blocks=1 00:11:00.938 00:11:00.938 ' 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
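[Annotator's note] The xtrace above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2 before picking coverage flags. A minimal bash sketch of that comparison, reduced to the '<' operator only (illustrative; the real cmp_versions also handles the other operators):
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"     # "1.15" -> (1 15)
    IFS=.- read -ra ver2 <<< "$3"     # "2"    -> (2)
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1                                              # equal is not '<'
}
lt 1.15 2 && echo "old lcov: use the --rc lcov_* option spelling"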
00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.938 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:00.939 
06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.939 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:03.475 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:03.475 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.475 06:14:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:03.475 Found net devices under 0000:84:00.0: cvl_0_0 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:03.475 Found net devices under 0000:84:00.1: cvl_0_1 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:03.475 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:03.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:11:03.476 00:11:03.476 --- 10.0.0.2 ping statistics --- 00:11:03.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.476 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:03.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:11:03.476 00:11:03.476 --- 10.0.0.1 ping statistics --- 00:11:03.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.476 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=989991 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 989991 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 989991 ']' 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.476 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.476 [2024-12-08 06:14:53.412209] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
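[Annotator's note] Condensed from the nvmf_tcp_init trace above: the target NIC is moved into a private network namespace while the initiator NIC stays in the root namespace, giving a self-contained 10.0.0.1 <-> 10.0.0.2 TCP path on one host (the cvl_0_0/cvl_0_1 device names are specific to this rig):
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target runs inside the namespace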
00:11:03.476 [2024-12-08 06:14:53.412321] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.476 [2024-12-08 06:14:53.487255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.476 [2024-12-08 06:14:53.547166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.476 [2024-12-08 06:14:53.547235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.476 [2024-12-08 06:14:53.547263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.476 [2024-12-08 06:14:53.547275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.476 [2024-12-08 06:14:53.547285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.476 [2024-12-08 06:14:53.549105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.476 [2024-12-08 06:14:53.549173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.476 [2024-12-08 06:14:53.549233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.476 [2024-12-08 06:14:53.549236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.737 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.737 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:03.737 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.737 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.737 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.737 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.737 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.737 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.737 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.737 [2024-12-08 06:14:53.700745] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.737 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.737 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:03.737 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.738 Malloc0 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.738 [2024-12-08 06:14:53.775342] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:03.738 test case1: single bdev can't be used in multiple subsystems 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.738 [2024-12-08 06:14:53.799132] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:03.738 [2024-12-08 06:14:53.799161] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:03.738 [2024-12-08 06:14:53.799191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.738 request: 00:11:03.738 { 00:11:03.738 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:03.738 "namespace": { 00:11:03.738 "bdev_name": "Malloc0", 00:11:03.738 "no_auto_visible": false, 
00:11:03.738 "hide_metadata": false 00:11:03.738 }, 00:11:03.738 "method": "nvmf_subsystem_add_ns", 00:11:03.738 "req_id": 1 00:11:03.738 } 00:11:03.738 Got JSON-RPC error response 00:11:03.738 response: 00:11:03.738 { 00:11:03.738 "code": -32602, 00:11:03.738 "message": "Invalid parameters" 00:11:03.738 } 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:03.738 Adding namespace failed - expected result. 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:03.738 test case2: host connect to nvmf target in multiple paths 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.738 [2024-12-08 06:14:53.811267] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.738 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:04.677 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:05.242 06:14:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.242 06:14:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:05.242 06:14:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.242 06:14:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:05.242 06:14:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:07.152 06:14:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:07.152 06:14:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:07.152 06:14:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.152 06:14:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:07.152 06:14:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.152 06:14:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:07.152 06:14:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:07.152 [global] 00:11:07.152 thread=1 00:11:07.152 invalidate=1 00:11:07.152 rw=write 00:11:07.152 time_based=1 00:11:07.152 runtime=1 00:11:07.152 ioengine=libaio 00:11:07.152 direct=1 00:11:07.152 bs=4096 00:11:07.152 iodepth=1 00:11:07.152 norandommap=0 00:11:07.152 numjobs=1 00:11:07.152 00:11:07.152 verify_dump=1 00:11:07.152 verify_backlog=512 00:11:07.152 verify_state_save=0 00:11:07.152 do_verify=1 00:11:07.152 verify=crc32c-intel 00:11:07.152 [job0] 00:11:07.152 filename=/dev/nvme0n1 00:11:07.152 Could not set queue depth (nvme0n1) 00:11:07.410 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.410 fio-3.35 00:11:07.410 Starting 1 thread 00:11:08.841 00:11:08.841 job0: (groupid=0, jobs=1): err= 0: pid=990628: Sun Dec 8 06:14:58 2024 00:11:08.841 read: IOPS=1029, BW=4120KiB/s (4219kB/s)(4132KiB/1003msec) 00:11:08.841 slat (nsec): min=6671, max=37635, avg=10184.80, stdev=4585.57 00:11:08.841 clat (usec): min=169, max=41062, avg=573.72, stdev=3785.46 00:11:08.841 lat (usec): min=177, max=41079, avg=583.91, stdev=3786.18 00:11:08.841 clat percentiles (usec): 00:11:08.841 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:11:08.841 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 210], 60.00th=[ 219], 00:11:08.841 | 70.00th=[ 239], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 277], 00:11:08.841 | 99.00th=[ 302], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:08.841 | 99.99th=[41157] 00:11:08.841 write: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec); 0 zone resets 00:11:08.841 slat (usec): min=8, max=34644, avg=84.13, stdev=1399.87 00:11:08.841 clat (usec): min=122, max=2306, avg=169.29, stdev=62.25 00:11:08.841 lat (usec): min=132, max=34855, avg=253.42, stdev=1403.16 00:11:08.841 clat percentiles (usec): 00:11:08.841 | 1.00th=[ 128], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 139], 00:11:08.841 | 30.00th=[ 145], 40.00th=[ 153], 50.00th=[ 165], 60.00th=[ 178], 00:11:08.841 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 217], 00:11:08.841 | 99.00th=[ 247], 99.50th=[ 260], 99.90th=[ 445], 99.95th=[ 2311], 00:11:08.841 | 99.99th=[ 2311] 00:11:08.841 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=6144.00, stdev=2896.31, samples=2 00:11:08.841 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:08.841 lat (usec) : 250=90.66%, 500=8.95% 00:11:08.841 lat (msec) : 4=0.04%, 50=0.35% 00:11:08.841 cpu : usr=2.20%, sys=4.89%, ctx=2575, majf=0, minf=1 00:11:08.841 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.841 issued rwts: total=1033,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.841 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.841 00:11:08.841 Run status group 0 (all jobs): 00:11:08.841 READ: bw=4120KiB/s (4219kB/s), 4120KiB/s-4120KiB/s (4219kB/s-4219kB/s), io=4132KiB (4231kB), run=1003-1003msec 00:11:08.841 WRITE: bw=6126KiB/s (6273kB/s), 6126KiB/s-6126KiB/s (6273kB/s-6273kB/s), io=6144KiB (6291kB), run=1003-1003msec 00:11:08.841 00:11:08.841 Disk stats (read/write): 00:11:08.841 nvme0n1: ios=1075/1536, merge=0/0, ticks=1097/251, in_queue=1348, util=99.50% 00:11:08.841 
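[Annotator's note] The [global]/[job0] file above is generated by scripts/fio-wrapper from '-p nvmf -i 4096 -d 1 -t write -r 1 -v'. An equivalent standalone fio invocation against the connected namespace would look roughly like this (illustrative only):
fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread=1 --rw=write \
    --bs=4096 --iodepth=1 --numjobs=1 --norandommap=0 \
    --time_based=1 --runtime=1 --invalidate=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0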
06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:08.841 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.842 rmmod nvme_tcp 00:11:08.842 rmmod nvme_fabrics 00:11:08.842 rmmod nvme_keyring 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 989991 ']' 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 989991 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 989991 ']' 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 989991 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 989991 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 989991' 00:11:08.842 killing process with pid 989991 00:11:08.842 06:14:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 989991 00:11:08.842 06:14:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 989991 00:11:09.111 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:09.111 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:09.111 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:09.111 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:09.111 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:09.111 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:09.111 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:09.111 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:09.111 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:09.111 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.111 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.111 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.016 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:11.016 00:11:11.016 real 0m10.318s 00:11:11.016 user 0m23.070s 00:11:11.016 sys 0m2.614s 00:11:11.016 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.016 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:11.016 ************************************ 00:11:11.016 END TEST nvmf_nmic 00:11:11.016 ************************************ 00:11:11.016 06:15:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:11.016 06:15:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.016 06:15:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.016 06:15:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:11.274 ************************************ 00:11:11.274 START TEST nvmf_fio_target 00:11:11.274 ************************************ 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:11.274 * Looking for test storage... 
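[Annotator's note] nvmftestfini, condensed from the nmic teardown traced above (a sketch; the real function in test/nvmf/common.sh adds retry loops and error guards):
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp        # rmmod output shows nvme_tcp/nvme_fabrics/nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill -0 "$nvmfpid" && kill "$nvmfpid" && wait "$nvmfpid"
iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip the test's ACCEPT rule
_remove_spdk_ns                                         # netns teardown (body elided in this log)
ip -4 addr flush cvl_0_1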
00:11:11.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.274 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:11.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.275 --rc genhtml_branch_coverage=1 00:11:11.275 --rc genhtml_function_coverage=1 00:11:11.275 --rc genhtml_legend=1 00:11:11.275 --rc geninfo_all_blocks=1 00:11:11.275 --rc geninfo_unexecuted_blocks=1 00:11:11.275 00:11:11.275 ' 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:11.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.275 --rc genhtml_branch_coverage=1 00:11:11.275 --rc genhtml_function_coverage=1 00:11:11.275 --rc genhtml_legend=1 00:11:11.275 --rc geninfo_all_blocks=1 00:11:11.275 --rc geninfo_unexecuted_blocks=1 00:11:11.275 00:11:11.275 ' 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:11.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.275 --rc genhtml_branch_coverage=1 00:11:11.275 --rc genhtml_function_coverage=1 00:11:11.275 --rc genhtml_legend=1 00:11:11.275 --rc geninfo_all_blocks=1 00:11:11.275 --rc geninfo_unexecuted_blocks=1 00:11:11.275 00:11:11.275 ' 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:11.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.275 --rc genhtml_branch_coverage=1 00:11:11.275 --rc genhtml_function_coverage=1 00:11:11.275 --rc genhtml_legend=1 00:11:11.275 --rc geninfo_all_blocks=1 00:11:11.275 --rc geninfo_unexecuted_blocks=1 00:11:11.275 00:11:11.275 ' 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.275 06:15:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.275 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.810 06:15:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:13.810 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:13.811 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:13.811 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.811 06:15:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:13.811 Found net devices under 0000:84:00.0: cvl_0_0 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:13.811 Found net devices under 0000:84:00.1: cvl_0_1 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:13.811 06:15:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:13.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:11:13.811 00:11:13.811 --- 10.0.0.2 ping statistics --- 00:11:13.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.811 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:13.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:11:13.811 00:11:13.811 --- 10.0.0.1 ping statistics --- 00:11:13.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.811 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:13.811 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=992852 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 992852 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 992852 ']' 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.812 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.812 [2024-12-08 06:15:03.704054] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
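The ping exchange just above confirms the dual-namespace topology that nvmftestinit builds for phy runs: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). A condensed replay of the commands logged above, with interface names, addresses, and port taken from this log (address flushes and error handling omitted; the real iptables rule also carries an SPDK_NVMF comment so the iptr helper seen at the end of nvmf_nmic can strip it during teardown):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

nvmf_tgt itself is then launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF line above), so its 10.0.0.2:4420 listener is only reachable across the NIC pair, while the /var/tmp/spdk.sock RPC socket, being path-based, remains usable from the root namespace; this is why the rpc.py calls that follow need no netns prefix.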
00:11:13.812 [2024-12-08 06:15:03.704166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.812 [2024-12-08 06:15:03.781503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.812 [2024-12-08 06:15:03.842669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.812 [2024-12-08 06:15:03.842822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.812 [2024-12-08 06:15:03.842848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.812 [2024-12-08 06:15:03.842863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.812 [2024-12-08 06:15:03.842873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.812 [2024-12-08 06:15:03.844681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.812 [2024-12-08 06:15:03.844812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.812 [2024-12-08 06:15:03.844842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.812 [2024-12-08 06:15:03.844845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.071 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.071 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:14.071 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.071 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.071 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.071 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.071 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:14.329 [2024-12-08 06:15:04.265381] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.329 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.588 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:14.588 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.846 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:14.846 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:15.116 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:15.116 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:15.377 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:15.377 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:15.634 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:16.203 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:16.203 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:16.203 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:16.203 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:16.769 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:16.769 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:16.769 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:17.026 06:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:17.026 06:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.345 06:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:17.345 06:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:17.603 06:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.860 [2024-12-08 06:15:07.971630] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.118 06:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:18.375 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:18.633 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:19.198 06:15:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:19.198 06:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:19.198 06:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:19.198 06:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:19.198 06:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:19.198 06:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:21.728 06:15:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:21.728 06:15:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:21.728 06:15:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.728 06:15:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:21.728 06:15:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.728 06:15:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:21.728 06:15:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:21.728 [global] 00:11:21.728 thread=1 00:11:21.728 invalidate=1 00:11:21.728 rw=write 00:11:21.728 time_based=1 00:11:21.728 runtime=1 00:11:21.728 ioengine=libaio 00:11:21.728 direct=1 00:11:21.728 bs=4096 00:11:21.728 iodepth=1 00:11:21.728 norandommap=0 00:11:21.728 numjobs=1 00:11:21.728 00:11:21.728 verify_dump=1 00:11:21.728 verify_backlog=512 00:11:21.728 verify_state_save=0 00:11:21.728 do_verify=1 00:11:21.728 verify=crc32c-intel 00:11:21.728 [job0] 00:11:21.728 filename=/dev/nvme0n1 00:11:21.728 [job1] 00:11:21.728 filename=/dev/nvme0n2 00:11:21.728 [job2] 00:11:21.728 filename=/dev/nvme0n3 00:11:21.728 [job3] 00:11:21.728 filename=/dev/nvme0n4 00:11:21.728 Could not set queue depth (nvme0n1) 00:11:21.728 Could not set queue depth (nvme0n2) 00:11:21.728 Could not set queue depth (nvme0n3) 00:11:21.728 Could not set queue depth (nvme0n4) 00:11:21.728 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.728 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.728 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.728 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.728 fio-3.35 00:11:21.728 Starting 4 threads 00:11:22.669 00:11:22.669 job0: (groupid=0, jobs=1): err= 0: pid=994436: Sun Dec 8 06:15:12 2024 00:11:22.669 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:22.669 slat (nsec): min=7113, max=73670, avg=15141.30, stdev=6696.08 00:11:22.669 clat (usec): min=174, max=40485, avg=348.52, stdev=1122.28 00:11:22.669 lat (usec): min=182, max=40493, avg=363.66, stdev=1122.35 00:11:22.669 clat percentiles (usec): 00:11:22.669 | 1.00th=[ 190], 5.00th=[ 204], 10.00th=[ 219], 20.00th=[ 241], 
00:11:22.669 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 289], 00:11:22.669 | 70.00th=[ 322], 80.00th=[ 392], 90.00th=[ 478], 95.00th=[ 529], 00:11:22.669 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[17957], 99.95th=[40633], 00:11:22.669 | 99.99th=[40633] 00:11:22.669 write: IOPS=1956, BW=7824KiB/s (8012kB/s)(7832KiB/1001msec); 0 zone resets 00:11:22.669 slat (nsec): min=7756, max=89024, avg=16735.16, stdev=7469.63 00:11:22.669 clat (usec): min=130, max=889, avg=200.49, stdev=41.16 00:11:22.669 lat (usec): min=141, max=918, avg=217.22, stdev=43.70 00:11:22.669 clat percentiles (usec): 00:11:22.669 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 165], 00:11:22.669 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 204], 60.00th=[ 210], 00:11:22.669 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 239], 95.00th=[ 249], 00:11:22.669 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 865], 99.95th=[ 889], 00:11:22.669 | 99.99th=[ 889] 00:11:22.669 bw ( KiB/s): min= 7568, max= 7568, per=34.81%, avg=7568.00, stdev= 0.00, samples=1 00:11:22.669 iops : min= 1892, max= 1892, avg=1892.00, stdev= 0.00, samples=1 00:11:22.669 lat (usec) : 250=65.74%, 500=31.02%, 750=3.12%, 1000=0.06% 00:11:22.669 lat (msec) : 20=0.03%, 50=0.03% 00:11:22.669 cpu : usr=3.80%, sys=7.90%, ctx=3494, majf=0, minf=2 00:11:22.669 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.669 issued rwts: total=1536,1958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.669 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.669 job1: (groupid=0, jobs=1): err= 0: pid=994437: Sun Dec 8 06:15:12 2024 00:11:22.669 read: IOPS=603, BW=2414KiB/s (2472kB/s)(2416KiB/1001msec) 00:11:22.669 slat (nsec): min=6900, max=52321, avg=17494.75, stdev=6126.31 00:11:22.669 clat (usec): min=200, max=41185, avg=1241.11, stdev=6110.59 00:11:22.669 lat (usec): min=209, max=41211, avg=1258.61, stdev=6110.77 00:11:22.669 clat percentiles (usec): 00:11:22.669 | 1.00th=[ 217], 5.00th=[ 251], 10.00th=[ 262], 20.00th=[ 269], 00:11:22.669 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:11:22.669 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 375], 95.00th=[ 457], 00:11:22.669 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:22.669 | 99.99th=[41157] 00:11:22.669 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:22.669 slat (usec): min=8, max=23280, avg=37.42, stdev=727.09 00:11:22.669 clat (usec): min=135, max=894, avg=190.40, stdev=41.65 00:11:22.669 lat (usec): min=145, max=23535, avg=227.82, stdev=730.40 00:11:22.669 clat percentiles (usec): 00:11:22.669 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:11:22.669 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 186], 60.00th=[ 198], 00:11:22.669 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 237], 95.00th=[ 247], 00:11:22.669 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 635], 99.95th=[ 898], 00:11:22.669 | 99.99th=[ 898] 00:11:22.669 bw ( KiB/s): min= 4096, max= 4096, per=18.84%, avg=4096.00, stdev= 0.00, samples=1 00:11:22.669 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:22.669 lat (usec) : 250=62.35%, 500=36.36%, 750=0.37%, 1000=0.06% 00:11:22.669 lat (msec) : 50=0.86% 00:11:22.669 cpu : usr=1.70%, sys=3.40%, ctx=1632, majf=0, minf=1 00:11:22.669 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.669 issued rwts: total=604,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.669 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.669 job2: (groupid=0, jobs=1): err= 0: pid=994438: Sun Dec 8 06:15:12 2024 00:11:22.669 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:22.669 slat (nsec): min=6348, max=82789, avg=15725.95, stdev=8586.65 00:11:22.669 clat (usec): min=189, max=41976, avg=610.64, stdev=3216.38 00:11:22.669 lat (usec): min=198, max=41993, avg=626.37, stdev=3217.07 00:11:22.669 clat percentiles (usec): 00:11:22.669 | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 231], 20.00th=[ 258], 00:11:22.669 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 347], 00:11:22.669 | 70.00th=[ 383], 80.00th=[ 457], 90.00th=[ 515], 95.00th=[ 545], 00:11:22.669 | 99.00th=[ 586], 99.50th=[40633], 99.90th=[41157], 99.95th=[42206], 00:11:22.669 | 99.99th=[42206] 00:11:22.669 write: IOPS=1520, BW=6082KiB/s (6228kB/s)(6088KiB/1001msec); 0 zone resets 00:11:22.669 slat (usec): min=7, max=108, avg=15.99, stdev= 8.42 00:11:22.669 clat (usec): min=146, max=805, avg=212.13, stdev=39.91 00:11:22.669 lat (usec): min=157, max=818, avg=228.12, stdev=43.63 00:11:22.669 clat percentiles (usec): 00:11:22.669 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 176], 00:11:22.669 | 30.00th=[ 190], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 221], 00:11:22.669 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 273], 00:11:22.669 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 400], 99.95th=[ 807], 00:11:22.669 | 99.99th=[ 807] 00:11:22.669 bw ( KiB/s): min= 8192, max= 8192, per=37.68%, avg=8192.00, stdev= 0.00, samples=1 00:11:22.669 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:22.669 lat (usec) : 250=60.21%, 500=34.80%, 750=4.67%, 1000=0.04% 00:11:22.669 lat (msec) : 50=0.27% 00:11:22.669 cpu : usr=2.20%, sys=6.00%, ctx=2547, majf=0, minf=1 00:11:22.669 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.669 issued rwts: total=1024,1522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.669 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.669 job3: (groupid=0, jobs=1): err= 0: pid=994439: Sun Dec 8 06:15:12 2024 00:11:22.669 read: IOPS=518, BW=2073KiB/s (2123kB/s)(2108KiB/1017msec) 00:11:22.669 slat (nsec): min=6477, max=58693, avg=12803.37, stdev=7550.70 00:11:22.669 clat (usec): min=200, max=42410, avg=1470.30, stdev=6840.98 00:11:22.669 lat (usec): min=207, max=42426, avg=1483.11, stdev=6843.35 00:11:22.669 clat percentiles (usec): 00:11:22.669 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 237], 20.00th=[ 258], 00:11:22.670 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 306], 00:11:22.670 | 70.00th=[ 318], 80.00th=[ 355], 90.00th=[ 396], 95.00th=[ 429], 00:11:22.670 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:22.670 | 99.99th=[42206] 00:11:22.670 write: IOPS=1006, BW=4028KiB/s (4124kB/s)(4096KiB/1017msec); 0 zone resets 00:11:22.670 slat (nsec): min=7818, max=80707, avg=17310.24, stdev=8922.73 00:11:22.670 clat (usec): min=131, max=3150, avg=205.60, stdev=96.81 00:11:22.670 lat 
(usec): min=163, max=3159, avg=222.91, stdev=98.13 00:11:22.670 clat percentiles (usec): 00:11:22.670 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 178], 00:11:22.670 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 206], 00:11:22.670 | 70.00th=[ 217], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 260], 00:11:22.670 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 326], 99.95th=[ 3163], 00:11:22.670 | 99.99th=[ 3163] 00:11:22.670 bw ( KiB/s): min= 8192, max= 8192, per=37.68%, avg=8192.00, stdev= 0.00, samples=1 00:11:22.670 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:22.670 lat (usec) : 250=66.15%, 500=32.82% 00:11:22.670 lat (msec) : 4=0.06%, 50=0.97% 00:11:22.670 cpu : usr=1.18%, sys=3.05%, ctx=1552, majf=0, minf=1 00:11:22.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.670 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.670 00:11:22.670 Run status group 0 (all jobs): 00:11:22.670 READ: bw=14.2MiB/s (14.9MB/s), 2073KiB/s-6138KiB/s (2123kB/s-6285kB/s), io=14.4MiB (15.1MB), run=1001-1017msec 00:11:22.670 WRITE: bw=21.2MiB/s (22.3MB/s), 4028KiB/s-7824KiB/s (4124kB/s-8012kB/s), io=21.6MiB (22.6MB), run=1001-1017msec 00:11:22.670 00:11:22.670 Disk stats (read/write): 00:11:22.670 nvme0n1: ios=1295/1536, merge=0/0, ticks=454/315, in_queue=769, util=86.77% 00:11:22.670 nvme0n2: ios=564/1023, merge=0/0, ticks=1211/190, in_queue=1401, util=94.00% 00:11:22.670 nvme0n3: ios=963/1024, merge=0/0, ticks=647/229, in_queue=876, util=95.21% 00:11:22.670 nvme0n4: ios=580/1024, merge=0/0, ticks=640/200, in_queue=840, util=91.70% 00:11:22.670 06:15:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:22.670 [global] 00:11:22.670 thread=1 00:11:22.670 invalidate=1 00:11:22.670 rw=randwrite 00:11:22.670 time_based=1 00:11:22.670 runtime=1 00:11:22.670 ioengine=libaio 00:11:22.670 direct=1 00:11:22.670 bs=4096 00:11:22.670 iodepth=1 00:11:22.670 norandommap=0 00:11:22.670 numjobs=1 00:11:22.670 00:11:22.670 verify_dump=1 00:11:22.670 verify_backlog=512 00:11:22.670 verify_state_save=0 00:11:22.670 do_verify=1 00:11:22.670 verify=crc32c-intel 00:11:22.670 [job0] 00:11:22.670 filename=/dev/nvme0n1 00:11:22.670 [job1] 00:11:22.670 filename=/dev/nvme0n2 00:11:22.670 [job2] 00:11:22.670 filename=/dev/nvme0n3 00:11:22.670 [job3] 00:11:22.670 filename=/dev/nvme0n4 00:11:22.670 Could not set queue depth (nvme0n1) 00:11:22.670 Could not set queue depth (nvme0n2) 00:11:22.670 Could not set queue depth (nvme0n3) 00:11:22.670 Could not set queue depth (nvme0n4) 00:11:22.930 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.930 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.930 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.930 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.930 fio-3.35 00:11:22.930 Starting 4 threads 00:11:24.332 00:11:24.332 job0: (groupid=0, jobs=1): err= 0: 
pid=994787: Sun Dec 8 06:15:14 2024 00:11:24.332 read: IOPS=20, BW=82.3KiB/s (84.2kB/s)(84.0KiB/1021msec) 00:11:24.332 slat (nsec): min=10488, max=31510, avg=21946.48, stdev=8188.25 00:11:24.332 clat (usec): min=40618, max=41062, avg=40949.50, stdev=92.05 00:11:24.332 lat (usec): min=40629, max=41077, avg=40971.45, stdev=91.31 00:11:24.332 clat percentiles (usec): 00:11:24.332 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:24.332 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:24.332 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:24.332 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:24.332 | 99.99th=[41157] 00:11:24.332 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:11:24.332 slat (nsec): min=7636, max=65434, avg=17204.93, stdev=8244.64 00:11:24.332 clat (usec): min=141, max=561, avg=289.92, stdev=63.39 00:11:24.332 lat (usec): min=160, max=578, avg=307.12, stdev=62.71 00:11:24.332 clat percentiles (usec): 00:11:24.332 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 208], 20.00th=[ 247], 00:11:24.332 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 297], 00:11:24.332 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 367], 95.00th=[ 408], 00:11:24.332 | 99.00th=[ 506], 99.50th=[ 515], 99.90th=[ 562], 99.95th=[ 562], 00:11:24.332 | 99.99th=[ 562] 00:11:24.332 bw ( KiB/s): min= 4096, max= 4096, per=29.31%, avg=4096.00, stdev= 0.00, samples=1 00:11:24.332 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:24.332 lat (usec) : 250=20.26%, 500=74.67%, 750=1.13% 00:11:24.332 lat (msec) : 50=3.94% 00:11:24.332 cpu : usr=0.78%, sys=1.08%, ctx=533, majf=0, minf=2 00:11:24.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.332 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.332 job1: (groupid=0, jobs=1): err= 0: pid=994788: Sun Dec 8 06:15:14 2024 00:11:24.332 read: IOPS=25, BW=101KiB/s (104kB/s)(104KiB/1026msec) 00:11:24.332 slat (nsec): min=10873, max=51570, avg=22291.31, stdev=9605.13 00:11:24.332 clat (usec): min=283, max=41272, avg=33116.21, stdev=16293.04 00:11:24.332 lat (usec): min=303, max=41293, avg=33138.50, stdev=16292.58 00:11:24.332 clat percentiles (usec): 00:11:24.332 | 1.00th=[ 285], 5.00th=[ 306], 10.00th=[ 322], 20.00th=[40633], 00:11:24.332 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:11:24.332 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:24.332 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:24.332 | 99.99th=[41157] 00:11:24.332 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:11:24.332 slat (nsec): min=9276, max=52423, avg=19305.85, stdev=9329.35 00:11:24.332 clat (usec): min=160, max=483, avg=294.94, stdev=53.11 00:11:24.332 lat (usec): min=172, max=505, avg=314.25, stdev=56.11 00:11:24.332 clat percentiles (usec): 00:11:24.332 | 1.00th=[ 172], 5.00th=[ 212], 10.00th=[ 241], 20.00th=[ 262], 00:11:24.332 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:11:24.332 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 363], 95.00th=[ 396], 00:11:24.332 | 99.00th=[ 461], 99.50th=[ 469], 99.90th=[ 486], 
99.95th=[ 486], 00:11:24.332 | 99.99th=[ 486] 00:11:24.332 bw ( KiB/s): min= 4096, max= 4096, per=29.31%, avg=4096.00, stdev= 0.00, samples=1 00:11:24.332 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:24.332 lat (usec) : 250=13.01%, 500=83.09% 00:11:24.332 lat (msec) : 50=3.90% 00:11:24.332 cpu : usr=0.68%, sys=1.27%, ctx=541, majf=0, minf=1 00:11:24.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.332 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.332 job2: (groupid=0, jobs=1): err= 0: pid=994789: Sun Dec 8 06:15:14 2024 00:11:24.333 read: IOPS=1528, BW=6115KiB/s (6262kB/s)(6176KiB/1010msec) 00:11:24.333 slat (nsec): min=5228, max=70215, avg=19409.48, stdev=11585.30 00:11:24.333 clat (usec): min=184, max=41279, avg=356.36, stdev=1069.04 00:11:24.333 lat (usec): min=191, max=41288, avg=375.77, stdev=1069.51 00:11:24.333 clat percentiles (usec): 00:11:24.333 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 227], 00:11:24.333 | 30.00th=[ 245], 40.00th=[ 273], 50.00th=[ 306], 60.00th=[ 355], 00:11:24.333 | 70.00th=[ 392], 80.00th=[ 420], 90.00th=[ 453], 95.00th=[ 482], 00:11:24.333 | 99.00th=[ 594], 99.50th=[ 758], 99.90th=[ 8586], 99.95th=[41157], 00:11:24.333 | 99.99th=[41157] 00:11:24.333 write: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec); 0 zone resets 00:11:24.333 slat (nsec): min=6673, max=86091, avg=14480.76, stdev=8762.84 00:11:24.333 clat (usec): min=129, max=452, avg=186.46, stdev=45.03 00:11:24.333 lat (usec): min=137, max=475, avg=200.94, stdev=49.48 00:11:24.333 clat percentiles (usec): 00:11:24.333 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 151], 00:11:24.333 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 184], 00:11:24.333 | 70.00th=[ 192], 80.00th=[ 208], 90.00th=[ 253], 95.00th=[ 289], 00:11:24.333 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 400], 99.95th=[ 408], 00:11:24.333 | 99.99th=[ 453] 00:11:24.333 bw ( KiB/s): min= 8192, max= 8192, per=58.63%, avg=8192.00, stdev= 0.00, samples=2 00:11:24.333 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:24.333 lat (usec) : 250=65.42%, 500=33.55%, 750=0.81%, 1000=0.14% 00:11:24.333 lat (msec) : 4=0.03%, 10=0.03%, 50=0.03% 00:11:24.333 cpu : usr=3.67%, sys=5.85%, ctx=3592, majf=0, minf=2 00:11:24.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.333 issued rwts: total=1544,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.333 job3: (groupid=0, jobs=1): err= 0: pid=994791: Sun Dec 8 06:15:14 2024 00:11:24.333 read: IOPS=103, BW=412KiB/s (422kB/s)(416KiB/1009msec) 00:11:24.333 slat (nsec): min=5902, max=36248, avg=11602.95, stdev=7454.21 00:11:24.333 clat (usec): min=189, max=41443, avg=8119.08, stdev=16049.98 00:11:24.333 lat (usec): min=205, max=41469, avg=8130.69, stdev=16055.18 00:11:24.333 clat percentiles (usec): 00:11:24.333 | 1.00th=[ 192], 5.00th=[ 227], 10.00th=[ 239], 20.00th=[ 253], 00:11:24.333 | 30.00th=[ 293], 40.00th=[ 334], 50.00th=[ 355], 60.00th=[ 371], 
00:11:24.333 | 70.00th=[ 400], 80.00th=[ 494], 90.00th=[41157], 95.00th=[41157], 00:11:24.333 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:24.333 | 99.99th=[41681] 00:11:24.333 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:11:24.333 slat (usec): min=8, max=125, avg=21.18, stdev=12.52 00:11:24.333 clat (usec): min=173, max=589, avg=289.94, stdev=54.15 00:11:24.333 lat (usec): min=190, max=613, avg=311.12, stdev=54.48 00:11:24.333 clat percentiles (usec): 00:11:24.333 | 1.00th=[ 182], 5.00th=[ 212], 10.00th=[ 235], 20.00th=[ 253], 00:11:24.333 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 293], 00:11:24.333 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 359], 95.00th=[ 392], 00:11:24.333 | 99.00th=[ 486], 99.50th=[ 515], 99.90th=[ 594], 99.95th=[ 594], 00:11:24.333 | 99.99th=[ 594] 00:11:24.333 bw ( KiB/s): min= 4096, max= 4096, per=29.31%, avg=4096.00, stdev= 0.00, samples=1 00:11:24.333 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:24.333 lat (usec) : 250=17.37%, 500=78.73%, 750=0.65% 00:11:24.333 lat (msec) : 50=3.25% 00:11:24.333 cpu : usr=0.89%, sys=1.19%, ctx=618, majf=0, minf=1 00:11:24.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.333 issued rwts: total=104,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.333 00:11:24.333 Run status group 0 (all jobs): 00:11:24.333 READ: bw=6608KiB/s (6767kB/s), 82.3KiB/s-6115KiB/s (84.2kB/s-6262kB/s), io=6780KiB (6943kB), run=1009-1026msec 00:11:24.333 WRITE: bw=13.6MiB/s (14.3MB/s), 1996KiB/s-8111KiB/s (2044kB/s-8306kB/s), io=14.0MiB (14.7MB), run=1009-1026msec 00:11:24.333 00:11:24.333 Disk stats (read/write): 00:11:24.333 nvme0n1: ios=58/512, merge=0/0, ticks=725/143, in_queue=868, util=86.87% 00:11:24.333 nvme0n2: ios=70/512, merge=0/0, ticks=1134/143, in_queue=1277, util=98.07% 00:11:24.333 nvme0n3: ios=1536/1539, merge=0/0, ticks=467/253, in_queue=720, util=88.94% 00:11:24.333 nvme0n4: ios=52/512, merge=0/0, ticks=1086/132, in_queue=1218, util=98.84% 00:11:24.333 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:24.333 [global] 00:11:24.333 thread=1 00:11:24.333 invalidate=1 00:11:24.333 rw=write 00:11:24.333 time_based=1 00:11:24.333 runtime=1 00:11:24.333 ioengine=libaio 00:11:24.333 direct=1 00:11:24.333 bs=4096 00:11:24.333 iodepth=128 00:11:24.333 norandommap=0 00:11:24.333 numjobs=1 00:11:24.333 00:11:24.333 verify_dump=1 00:11:24.333 verify_backlog=512 00:11:24.333 verify_state_save=0 00:11:24.333 do_verify=1 00:11:24.333 verify=crc32c-intel 00:11:24.333 [job0] 00:11:24.333 filename=/dev/nvme0n1 00:11:24.333 [job1] 00:11:24.333 filename=/dev/nvme0n2 00:11:24.333 [job2] 00:11:24.333 filename=/dev/nvme0n3 00:11:24.333 [job3] 00:11:24.333 filename=/dev/nvme0n4 00:11:24.333 Could not set queue depth (nvme0n1) 00:11:24.333 Could not set queue depth (nvme0n2) 00:11:24.333 Could not set queue depth (nvme0n3) 00:11:24.333 Could not set queue depth (nvme0n4) 00:11:24.333 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.333 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.333 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.333 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.333 fio-3.35 00:11:24.333 Starting 4 threads 00:11:25.753 00:11:25.753 job0: (groupid=0, jobs=1): err= 0: pid=995017: Sun Dec 8 06:15:15 2024 00:11:25.753 read: IOPS=3253, BW=12.7MiB/s (13.3MB/s)(13.3MiB/1043msec) 00:11:25.753 slat (usec): min=3, max=12501, avg=171.32, stdev=1050.84 00:11:25.753 clat (usec): min=8492, max=53248, avg=23711.13, stdev=12600.41 00:11:25.753 lat (usec): min=8689, max=53254, avg=23882.45, stdev=12631.02 00:11:25.753 clat percentiles (usec): 00:11:25.753 | 1.00th=[ 9503], 5.00th=[11076], 10.00th=[11338], 20.00th=[11731], 00:11:25.753 | 30.00th=[11994], 40.00th=[13435], 50.00th=[20579], 60.00th=[26346], 00:11:25.753 | 70.00th=[30802], 80.00th=[35914], 90.00th=[43779], 95.00th=[46924], 00:11:25.753 | 99.00th=[50070], 99.50th=[52691], 99.90th=[52691], 99.95th=[53216], 00:11:25.753 | 99.99th=[53216] 00:11:25.753 write: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1043msec); 0 zone resets 00:11:25.753 slat (usec): min=4, max=23178, avg=107.69, stdev=650.76 00:11:25.753 clat (usec): min=8628, max=38787, avg=13673.06, stdev=3695.76 00:11:25.753 lat (usec): min=8636, max=38810, avg=13780.74, stdev=3695.98 00:11:25.753 clat percentiles (usec): 00:11:25.753 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[11338], 00:11:25.753 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12387], 60.00th=[14353], 00:11:25.753 | 70.00th=[14746], 80.00th=[15926], 90.00th=[17433], 95.00th=[19006], 00:11:25.753 | 99.00th=[22676], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:11:25.753 | 99.99th=[38536] 00:11:25.753 bw ( KiB/s): min=12288, max=16384, per=24.79%, avg=14336.00, stdev=2896.31, samples=2 00:11:25.753 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:25.753 lat (msec) : 10=4.99%, 20=68.20%, 50=26.39%, 100=0.43% 00:11:25.753 cpu : usr=3.55%, sys=6.62%, ctx=250, majf=0, minf=1 00:11:25.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:25.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.753 issued rwts: total=3393,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.753 job1: (groupid=0, jobs=1): err= 0: pid=995018: Sun Dec 8 06:15:15 2024 00:11:25.753 read: IOPS=3768, BW=14.7MiB/s (15.4MB/s)(14.9MiB/1014msec) 00:11:25.753 slat (usec): min=2, max=17352, avg=119.67, stdev=876.43 00:11:25.753 clat (usec): min=5596, max=54297, avg=15324.59, stdev=7693.48 00:11:25.753 lat (usec): min=5603, max=54331, avg=15444.26, stdev=7753.92 00:11:25.753 clat percentiles (usec): 00:11:25.753 | 1.00th=[ 5800], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[10814], 00:11:25.753 | 30.00th=[11863], 40.00th=[11994], 50.00th=[13435], 60.00th=[13960], 00:11:25.753 | 70.00th=[14746], 80.00th=[16909], 90.00th=[25822], 95.00th=[30802], 00:11:25.753 | 99.00th=[49021], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:11:25.753 | 99.99th=[54264] 00:11:25.753 write: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec); 0 zone resets 00:11:25.753 slat (usec): min=3, max=19569, avg=118.28, stdev=649.30 00:11:25.753 clat (usec): min=2494, max=51480, avg=17106.62, 
stdev=8720.02 00:11:25.753 lat (usec): min=2502, max=51488, avg=17224.90, stdev=8789.72 00:11:25.753 clat percentiles (usec): 00:11:25.754 | 1.00th=[ 3654], 5.00th=[ 6980], 10.00th=[ 9372], 20.00th=[10814], 00:11:25.754 | 30.00th=[11338], 40.00th=[11600], 50.00th=[13829], 60.00th=[20317], 00:11:25.754 | 70.00th=[20579], 80.00th=[21365], 90.00th=[26608], 95.00th=[34866], 00:11:25.754 | 99.00th=[46924], 99.50th=[49546], 99.90th=[51643], 99.95th=[51643], 00:11:25.754 | 99.99th=[51643] 00:11:25.754 bw ( KiB/s): min=15632, max=17170, per=28.36%, avg=16401.00, stdev=1087.53, samples=2 00:11:25.754 iops : min= 3908, max= 4292, avg=4100.00, stdev=271.53, samples=2 00:11:25.754 lat (msec) : 4=0.58%, 10=13.19%, 20=57.17%, 50=28.87%, 100=0.19% 00:11:25.754 cpu : usr=5.43%, sys=6.42%, ctx=433, majf=0, minf=1 00:11:25.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:25.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.754 issued rwts: total=3821,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.754 job2: (groupid=0, jobs=1): err= 0: pid=995019: Sun Dec 8 06:15:15 2024 00:11:25.754 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:11:25.754 slat (usec): min=3, max=7751, avg=112.90, stdev=624.84 00:11:25.754 clat (usec): min=7870, max=25906, avg=14089.62, stdev=2668.94 00:11:25.754 lat (usec): min=7885, max=25943, avg=14202.52, stdev=2718.40 00:11:25.754 clat percentiles (usec): 00:11:25.754 | 1.00th=[ 8586], 5.00th=[ 9896], 10.00th=[10945], 20.00th=[12125], 00:11:25.754 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13698], 60.00th=[14746], 00:11:25.754 | 70.00th=[15270], 80.00th=[16450], 90.00th=[17957], 95.00th=[18220], 00:11:25.754 | 99.00th=[21103], 99.50th=[22152], 99.90th=[25297], 99.95th=[25560], 00:11:25.754 | 99.99th=[25822] 00:11:25.754 write: IOPS=3792, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1006msec); 0 zone resets 00:11:25.754 slat (usec): min=4, max=31097, avg=145.97, stdev=901.49 00:11:25.754 clat (usec): min=5110, max=78777, avg=19994.00, stdev=10420.40 00:11:25.754 lat (usec): min=5793, max=78828, avg=20139.97, stdev=10494.30 00:11:25.754 clat percentiles (usec): 00:11:25.754 | 1.00th=[ 7570], 5.00th=[10945], 10.00th=[12125], 20.00th=[12780], 00:11:25.754 | 30.00th=[12911], 40.00th=[13304], 50.00th=[14615], 60.00th=[19792], 00:11:25.754 | 70.00th=[23725], 80.00th=[28443], 90.00th=[32113], 95.00th=[35390], 00:11:25.754 | 99.00th=[65799], 99.50th=[65799], 99.90th=[66323], 99.95th=[72877], 00:11:25.754 | 99.99th=[79168] 00:11:25.754 bw ( KiB/s): min=12288, max=17216, per=25.51%, avg=14752.00, stdev=3484.62, samples=2 00:11:25.754 iops : min= 3072, max= 4304, avg=3688.00, stdev=871.16, samples=2 00:11:25.754 lat (msec) : 10=4.18%, 20=74.54%, 50=20.42%, 100=0.86% 00:11:25.754 cpu : usr=3.68%, sys=8.96%, ctx=471, majf=0, minf=1 00:11:25.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:25.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.754 issued rwts: total=3584,3815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.754 job3: (groupid=0, jobs=1): err= 0: pid=995020: Sun Dec 8 06:15:15 2024 00:11:25.754 read: IOPS=3023, BW=11.8MiB/s 
(12.4MB/s)(12.0MiB/1016msec) 00:11:25.754 slat (usec): min=2, max=14236, avg=125.28, stdev=872.49 00:11:25.754 clat (usec): min=4119, max=52090, avg=16707.90, stdev=9250.89 00:11:25.754 lat (usec): min=4127, max=52099, avg=16833.18, stdev=9321.55 00:11:25.754 clat percentiles (usec): 00:11:25.754 | 1.00th=[ 4293], 5.00th=[ 4424], 10.00th=[ 4621], 20.00th=[11994], 00:11:25.754 | 30.00th=[12387], 40.00th=[13566], 50.00th=[14091], 60.00th=[15401], 00:11:25.754 | 70.00th=[16450], 80.00th=[22414], 90.00th=[29492], 95.00th=[39060], 00:11:25.754 | 99.00th=[47449], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119], 00:11:25.754 | 99.99th=[52167] 00:11:25.754 write: IOPS=3526, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1016msec); 0 zone resets 00:11:25.754 slat (usec): min=3, max=11310, avg=149.38, stdev=773.95 00:11:25.754 clat (usec): min=357, max=108798, avg=21671.02, stdev=16855.83 00:11:25.754 lat (usec): min=440, max=108808, avg=21820.40, stdev=16957.25 00:11:25.754 clat percentiles (usec): 00:11:25.754 | 1.00th=[ 1631], 5.00th=[ 5669], 10.00th=[ 9110], 20.00th=[ 12125], 00:11:25.754 | 30.00th=[ 13042], 40.00th=[ 16188], 50.00th=[ 19530], 60.00th=[ 20579], 00:11:25.754 | 70.00th=[ 20579], 80.00th=[ 23200], 90.00th=[ 41681], 95.00th=[ 57410], 00:11:25.754 | 99.00th=[ 98042], 99.50th=[107480], 99.90th=[108528], 99.95th=[108528], 00:11:25.754 | 99.99th=[108528] 00:11:25.754 bw ( KiB/s): min=11192, max=16448, per=23.90%, avg=13820.00, stdev=3716.55, samples=2 00:11:25.754 iops : min= 2798, max= 4112, avg=3455.00, stdev=929.14, samples=2 00:11:25.754 lat (usec) : 500=0.06%, 1000=0.18% 00:11:25.754 lat (msec) : 2=1.14%, 4=0.92%, 10=11.63%, 20=49.83%, 50=31.98% 00:11:25.754 lat (msec) : 100=3.92%, 250=0.35% 00:11:25.754 cpu : usr=2.96%, sys=4.33%, ctx=422, majf=0, minf=1 00:11:25.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:25.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.754 issued rwts: total=3072,3583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.754 00:11:25.754 Run status group 0 (all jobs): 00:11:25.754 READ: bw=51.9MiB/s (54.5MB/s), 11.8MiB/s-14.7MiB/s (12.4MB/s-15.4MB/s), io=54.2MiB (56.8MB), run=1006-1043msec 00:11:25.754 WRITE: bw=56.5MiB/s (59.2MB/s), 13.4MiB/s-15.8MiB/s (14.1MB/s-16.5MB/s), io=58.9MiB (61.8MB), run=1006-1043msec 00:11:25.754 00:11:25.754 Disk stats (read/write): 00:11:25.754 nvme0n1: ios=2610/2727, merge=0/0, ticks=16290/9306, in_queue=25596, util=86.47% 00:11:25.754 nvme0n2: ios=3092/3359, merge=0/0, ticks=39558/53909, in_queue=93467, util=86.89% 00:11:25.754 nvme0n3: ios=2727/3072, merge=0/0, ticks=18466/31865, in_queue=50331, util=88.92% 00:11:25.754 nvme0n4: ios=3072/3111, merge=0/0, ticks=28387/31537, in_queue=59924, util=89.57% 00:11:25.754 06:15:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:25.754 [global] 00:11:25.754 thread=1 00:11:25.754 invalidate=1 00:11:25.754 rw=randwrite 00:11:25.754 time_based=1 00:11:25.754 runtime=1 00:11:25.754 ioengine=libaio 00:11:25.754 direct=1 00:11:25.754 bs=4096 00:11:25.754 iodepth=128 00:11:25.754 norandommap=0 00:11:25.754 numjobs=1 00:11:25.754 00:11:25.754 verify_dump=1 00:11:25.754 verify_backlog=512 00:11:25.754 verify_state_save=0 00:11:25.754 do_verify=1 
00:11:25.754 verify=crc32c-intel 00:11:25.754 [job0] 00:11:25.754 filename=/dev/nvme0n1 00:11:25.754 [job1] 00:11:25.754 filename=/dev/nvme0n2 00:11:25.754 [job2] 00:11:25.754 filename=/dev/nvme0n3 00:11:25.754 [job3] 00:11:25.754 filename=/dev/nvme0n4 00:11:25.754 Could not set queue depth (nvme0n1) 00:11:25.754 Could not set queue depth (nvme0n2) 00:11:25.754 Could not set queue depth (nvme0n3) 00:11:25.754 Could not set queue depth (nvme0n4) 00:11:26.012 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:26.012 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:26.012 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:26.012 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:26.012 fio-3.35 00:11:26.012 Starting 4 threads 00:11:27.401 00:11:27.401 job0: (groupid=0, jobs=1): err= 0: pid=995250: Sun Dec 8 06:15:17 2024 00:11:27.401 read: IOPS=3952, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1005msec) 00:11:27.401 slat (usec): min=3, max=11081, avg=112.22, stdev=731.60 00:11:27.401 clat (usec): min=1612, max=30641, avg=15074.88, stdev=4058.51 00:11:27.401 lat (usec): min=4662, max=30650, avg=15187.10, stdev=4110.99 00:11:27.401 clat percentiles (usec): 00:11:27.401 | 1.00th=[ 8094], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[11994], 00:11:27.401 | 30.00th=[13042], 40.00th=[13960], 50.00th=[14746], 60.00th=[15795], 00:11:27.401 | 70.00th=[16450], 80.00th=[17695], 90.00th=[20841], 95.00th=[22414], 00:11:27.401 | 99.00th=[27395], 99.50th=[28967], 99.90th=[30540], 99.95th=[30540], 00:11:27.401 | 99.99th=[30540] 00:11:27.401 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:11:27.401 slat (usec): min=4, max=38007, avg=120.10, stdev=855.60 00:11:27.401 clat (usec): min=5359, max=51314, avg=16411.07, stdev=8507.71 00:11:27.401 lat (usec): min=5424, max=51346, avg=16531.17, stdev=8566.05 00:11:27.401 clat percentiles (usec): 00:11:27.401 | 1.00th=[ 7177], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[10552], 00:11:27.401 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13566], 60.00th=[15533], 00:11:27.401 | 70.00th=[19268], 80.00th=[20579], 90.00th=[21365], 95.00th=[41681], 00:11:27.401 | 99.00th=[49021], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:11:27.401 | 99.99th=[51119] 00:11:27.401 bw ( KiB/s): min=16351, max=16384, per=24.12%, avg=16367.50, stdev=23.33, samples=2 00:11:27.401 iops : min= 4087, max= 4096, avg=4091.50, stdev= 6.36, samples=2 00:11:27.401 lat (msec) : 2=0.01%, 4=0.01%, 10=14.06%, 20=64.58%, 50=20.97% 00:11:27.401 lat (msec) : 100=0.37% 00:11:27.401 cpu : usr=3.88%, sys=10.26%, ctx=291, majf=0, minf=2 00:11:27.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:27.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:27.401 issued rwts: total=3972,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:27.401 job1: (groupid=0, jobs=1): err= 0: pid=995253: Sun Dec 8 06:15:17 2024 00:11:27.401 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:11:27.401 slat (usec): min=2, max=13142, avg=144.14, stdev=872.29 00:11:27.401 clat (usec): min=4447, max=90262, avg=17064.06, stdev=11592.88 
00:11:27.401 lat (usec): min=4466, max=90276, avg=17208.19, stdev=11705.59 00:11:27.401 clat percentiles (usec): 00:11:27.401 | 1.00th=[ 6783], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10814], 00:11:27.401 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11863], 60.00th=[13173], 00:11:27.401 | 70.00th=[16909], 80.00th=[21890], 90.00th=[33162], 95.00th=[39584], 00:11:27.401 | 99.00th=[73925], 99.50th=[84411], 99.90th=[90702], 99.95th=[90702], 00:11:27.401 | 99.99th=[90702] 00:11:27.401 write: IOPS=3721, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1004msec); 0 zone resets 00:11:27.401 slat (usec): min=4, max=11536, avg=118.26, stdev=679.71 00:11:27.401 clat (usec): min=599, max=95585, avg=17512.07, stdev=12399.27 00:11:27.401 lat (usec): min=607, max=95605, avg=17630.32, stdev=12456.46 00:11:27.401 clat percentiles (usec): 00:11:27.401 | 1.00th=[ 3720], 5.00th=[ 7635], 10.00th=[ 9372], 20.00th=[10290], 00:11:27.401 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11994], 60.00th=[17957], 00:11:27.401 | 70.00th=[21103], 80.00th=[23462], 90.00th=[27132], 95.00th=[34341], 00:11:27.401 | 99.00th=[88605], 99.50th=[89654], 99.90th=[95945], 99.95th=[95945], 00:11:27.401 | 99.99th=[95945] 00:11:27.401 bw ( KiB/s): min=12488, max=16384, per=21.28%, avg=14436.00, stdev=2754.89, samples=2 00:11:27.401 iops : min= 3122, max= 4096, avg=3609.00, stdev=688.72, samples=2 00:11:27.401 lat (usec) : 750=0.05% 00:11:27.401 lat (msec) : 2=0.08%, 4=0.46%, 10=11.45%, 20=57.02%, 50=28.85% 00:11:27.401 lat (msec) : 100=2.08% 00:11:27.401 cpu : usr=3.79%, sys=7.88%, ctx=324, majf=0, minf=1 00:11:27.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:27.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:27.401 issued rwts: total=3584,3736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:27.401 job2: (groupid=0, jobs=1): err= 0: pid=995254: Sun Dec 8 06:15:17 2024 00:11:27.401 read: IOPS=4345, BW=17.0MiB/s (17.8MB/s)(17.1MiB/1005msec) 00:11:27.401 slat (usec): min=2, max=19911, avg=105.41, stdev=708.00 00:11:27.401 clat (usec): min=934, max=39592, avg=13556.11, stdev=4515.25 00:11:27.401 lat (usec): min=4357, max=59504, avg=13661.52, stdev=4572.65 00:11:27.401 clat percentiles (usec): 00:11:27.401 | 1.00th=[ 5932], 5.00th=[ 7832], 10.00th=[10028], 20.00th=[11469], 00:11:27.401 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12518], 60.00th=[13304], 00:11:27.401 | 70.00th=[13829], 80.00th=[14615], 90.00th=[19006], 95.00th=[21890], 00:11:27.401 | 99.00th=[32637], 99.50th=[35390], 99.90th=[39584], 99.95th=[39584], 00:11:27.401 | 99.99th=[39584] 00:11:27.401 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:11:27.401 slat (usec): min=3, max=20106, avg=104.73, stdev=777.03 00:11:27.401 clat (usec): min=5438, max=52422, avg=14740.63, stdev=5974.83 00:11:27.401 lat (usec): min=5444, max=52457, avg=14845.36, stdev=6036.59 00:11:27.401 clat percentiles (usec): 00:11:27.401 | 1.00th=[ 7046], 5.00th=[ 9765], 10.00th=[11469], 20.00th=[11731], 00:11:27.401 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12649], 60.00th=[13042], 00:11:27.401 | 70.00th=[13566], 80.00th=[15401], 90.00th=[24511], 95.00th=[32113], 00:11:27.401 | 99.00th=[36439], 99.50th=[36439], 99.90th=[44303], 99.95th=[44303], 00:11:27.401 | 99.99th=[52167] 00:11:27.401 bw ( KiB/s): min=16351, max=20480, per=27.14%, avg=18415.50, stdev=2919.64, samples=2 
00:11:27.401 iops : min= 4087, max= 5120, avg=4603.50, stdev=730.44, samples=2 00:11:27.401 lat (usec) : 1000=0.01% 00:11:27.401 lat (msec) : 10=7.50%, 20=82.40%, 50=10.08%, 100=0.01% 00:11:27.401 cpu : usr=4.28%, sys=9.26%, ctx=366, majf=0, minf=1 00:11:27.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:27.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:27.401 issued rwts: total=4367,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:27.401 job3: (groupid=0, jobs=1): err= 0: pid=995255: Sun Dec 8 06:15:17 2024 00:11:27.401 read: IOPS=4369, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1002msec) 00:11:27.401 slat (usec): min=3, max=11338, avg=105.33, stdev=504.75 00:11:27.401 clat (usec): min=1646, max=38899, avg=14102.55, stdev=3996.79 00:11:27.401 lat (usec): min=1664, max=38907, avg=14207.88, stdev=3993.30 00:11:27.401 clat percentiles (usec): 00:11:27.401 | 1.00th=[ 5735], 5.00th=[11076], 10.00th=[11731], 20.00th=[12649], 00:11:27.401 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:11:27.401 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15139], 95.00th=[15795], 00:11:27.401 | 99.00th=[37487], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:11:27.401 | 99.99th=[39060] 00:11:27.402 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:11:27.402 slat (usec): min=5, max=12084, avg=105.29, stdev=565.95 00:11:27.402 clat (usec): min=9295, max=49105, avg=13933.19, stdev=5228.65 00:11:27.402 lat (usec): min=9358, max=49117, avg=14038.48, stdev=5246.75 00:11:27.402 clat percentiles (usec): 00:11:27.402 | 1.00th=[ 9765], 5.00th=[10814], 10.00th=[11338], 20.00th=[11863], 00:11:27.402 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[13698], 00:11:27.402 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14746], 95.00th=[15795], 00:11:27.402 | 99.00th=[45351], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:11:27.402 | 99.99th=[49021] 00:11:27.402 bw ( KiB/s): min=16384, max=20480, per=27.16%, avg=18432.00, stdev=2896.31, samples=2 00:11:27.402 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:11:27.402 lat (msec) : 2=0.09%, 4=0.08%, 10=1.44%, 20=94.66%, 50=3.74% 00:11:27.402 cpu : usr=5.69%, sys=10.79%, ctx=440, majf=0, minf=1 00:11:27.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:27.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:27.402 issued rwts: total=4378,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:27.402 00:11:27.402 Run status group 0 (all jobs): 00:11:27.402 READ: bw=63.4MiB/s (66.4MB/s), 13.9MiB/s-17.1MiB/s (14.6MB/s-17.9MB/s), io=63.7MiB (66.8MB), run=1002-1005msec 00:11:27.402 WRITE: bw=66.3MiB/s (69.5MB/s), 14.5MiB/s-18.0MiB/s (15.2MB/s-18.8MB/s), io=66.6MiB (69.8MB), run=1002-1005msec 00:11:27.402 00:11:27.402 Disk stats (read/write): 00:11:27.402 nvme0n1: ios=3171/3584, merge=0/0, ticks=30477/33076, in_queue=63553, util=89.78% 00:11:27.402 nvme0n2: ios=3157/3584, merge=0/0, ticks=29863/35883, in_queue=65746, util=98.88% 00:11:27.402 nvme0n3: ios=3641/3890, merge=0/0, ticks=25201/25578, in_queue=50779, util=91.26% 00:11:27.402 nvme0n4: ios=3609/4027, merge=0/0, 
ticks=13352/12993, in_queue=26345, util=98.22% 00:11:27.402 06:15:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:27.402 06:15:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=995393 00:11:27.402 06:15:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:27.402 06:15:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:27.402 [global] 00:11:27.402 thread=1 00:11:27.402 invalidate=1 00:11:27.402 rw=read 00:11:27.402 time_based=1 00:11:27.402 runtime=10 00:11:27.402 ioengine=libaio 00:11:27.402 direct=1 00:11:27.402 bs=4096 00:11:27.402 iodepth=1 00:11:27.402 norandommap=1 00:11:27.402 numjobs=1 00:11:27.402 00:11:27.402 [job0] 00:11:27.402 filename=/dev/nvme0n1 00:11:27.402 [job1] 00:11:27.402 filename=/dev/nvme0n2 00:11:27.402 [job2] 00:11:27.402 filename=/dev/nvme0n3 00:11:27.402 [job3] 00:11:27.402 filename=/dev/nvme0n4 00:11:27.402 Could not set queue depth (nvme0n1) 00:11:27.402 Could not set queue depth (nvme0n2) 00:11:27.402 Could not set queue depth (nvme0n3) 00:11:27.402 Could not set queue depth (nvme0n4) 00:11:27.402 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.402 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.402 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.402 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.402 fio-3.35 00:11:27.402 Starting 4 threads 00:11:30.692 06:15:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:30.692 06:15:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:30.692 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10952704, buflen=4096 00:11:30.692 fio: pid=995496, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:30.692 06:15:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.692 06:15:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:30.973 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=352256, buflen=4096 00:11:30.973 fio: pid=995495, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:31.231 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=53719040, buflen=4096 00:11:31.231 fio: pid=995488, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:31.231 06:15:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:31.231 06:15:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:31.490 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=59170816, buflen=4096 
00:11:31.490 fio: pid=995494, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:31.490 06:15:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:31.490 06:15:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:31.490 00:11:31.490 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=995488: Sun Dec 8 06:15:21 2024 00:11:31.490 read: IOPS=3658, BW=14.3MiB/s (15.0MB/s)(51.2MiB/3585msec) 00:11:31.490 slat (usec): min=5, max=18597, avg=13.87, stdev=254.82 00:11:31.490 clat (usec): min=168, max=41321, avg=254.53, stdev=580.87 00:11:31.490 lat (usec): min=174, max=41328, avg=268.40, stdev=634.73 00:11:31.490 clat percentiles (usec): 00:11:31.490 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 212], 00:11:31.490 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 245], 00:11:31.490 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 306], 00:11:31.490 | 99.00th=[ 453], 99.50th=[ 502], 99.90th=[ 619], 99.95th=[ 3064], 00:11:31.490 | 99.99th=[41157] 00:11:31.490 bw ( KiB/s): min=13928, max=16616, per=47.92%, avg=15028.00, stdev=1083.10, samples=6 00:11:31.490 iops : min= 3482, max= 4154, avg=3757.00, stdev=270.78, samples=6 00:11:31.490 lat (usec) : 250=65.28%, 500=34.16%, 750=0.49%, 1000=0.01% 00:11:31.490 lat (msec) : 4=0.01%, 10=0.02%, 20=0.01%, 50=0.02% 00:11:31.490 cpu : usr=1.62%, sys=6.17%, ctx=13122, majf=0, minf=1 00:11:31.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.490 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.490 issued rwts: total=13116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.490 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=995494: Sun Dec 8 06:15:21 2024 00:11:31.490 read: IOPS=3735, BW=14.6MiB/s (15.3MB/s)(56.4MiB/3867msec) 00:11:31.490 slat (usec): min=4, max=13690, avg=14.22, stdev=225.89 00:11:31.490 clat (usec): min=162, max=41288, avg=250.09, stdev=589.58 00:11:31.490 lat (usec): min=168, max=41297, avg=264.32, stdev=632.17 00:11:31.490 clat percentiles (usec): 00:11:31.490 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 204], 00:11:31.490 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 247], 00:11:31.490 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 310], 00:11:31.490 | 99.00th=[ 404], 99.50th=[ 469], 99.90th=[ 586], 99.95th=[ 816], 00:11:31.490 | 99.99th=[41157] 00:11:31.490 bw ( KiB/s): min=11560, max=16936, per=46.92%, avg=14715.71, stdev=1645.17, samples=7 00:11:31.490 iops : min= 2890, max= 4234, avg=3678.86, stdev=411.26, samples=7 00:11:31.490 lat (usec) : 250=62.17%, 500=37.52%, 750=0.26%, 1000=0.03% 00:11:31.490 lat (msec) : 2=0.01%, 50=0.02% 00:11:31.490 cpu : usr=2.17%, sys=5.85%, ctx=14456, majf=0, minf=2 00:11:31.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.490 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.490 issued rwts: total=14447,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:11:31.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.490 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=995495: Sun Dec 8 06:15:21 2024 00:11:31.490 read: IOPS=26, BW=104KiB/s (106kB/s)(344KiB/3308msec) 00:11:31.490 slat (nsec): min=7932, max=52223, avg=18533.72, stdev=9426.23 00:11:31.490 clat (usec): min=290, max=58004, avg=38104.82, stdev=11370.69 00:11:31.490 lat (usec): min=300, max=58017, avg=38123.46, stdev=11370.40 00:11:31.490 clat percentiles (usec): 00:11:31.490 | 1.00th=[ 289], 5.00th=[ 482], 10.00th=[40633], 20.00th=[41157], 00:11:31.490 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:31.490 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:31.490 | 99.00th=[57934], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:11:31.490 | 99.99th=[57934] 00:11:31.490 bw ( KiB/s): min= 96, max= 128, per=0.33%, avg=104.00, stdev=12.39, samples=6 00:11:31.490 iops : min= 24, max= 32, avg=26.00, stdev= 3.10, samples=6 00:11:31.490 lat (usec) : 500=5.75%, 750=1.15% 00:11:31.490 lat (msec) : 4=1.15%, 50=89.66%, 100=1.15% 00:11:31.490 cpu : usr=0.09%, sys=0.00%, ctx=89, majf=0, minf=1 00:11:31.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.490 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.490 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.490 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=995496: Sun Dec 8 06:15:21 2024 00:11:31.490 read: IOPS=894, BW=3575KiB/s (3661kB/s)(10.4MiB/2992msec) 00:11:31.490 slat (nsec): min=6810, max=62132, avg=11298.87, stdev=5820.55 00:11:31.490 clat (usec): min=190, max=42086, avg=1095.80, stdev=5760.87 00:11:31.490 lat (usec): min=198, max=42098, avg=1107.09, stdev=5761.77 00:11:31.490 clat percentiles (usec): 00:11:31.490 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 221], 00:11:31.490 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 249], 60.00th=[ 269], 00:11:31.490 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 343], 95.00th=[ 449], 00:11:31.490 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:31.490 | 99.99th=[42206] 00:11:31.490 bw ( KiB/s): min= 96, max=14720, per=13.22%, avg=4145.60, stdev=6150.01, samples=5 00:11:31.490 iops : min= 24, max= 3680, avg=1036.40, stdev=1537.50, samples=5 00:11:31.490 lat (usec) : 250=51.03%, 500=45.31%, 750=1.50%, 1000=0.11% 00:11:31.490 lat (msec) : 50=2.02% 00:11:31.490 cpu : usr=0.43%, sys=1.60%, ctx=2675, majf=0, minf=1 00:11:31.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:31.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.490 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.490 issued rwts: total=2675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:31.490 00:11:31.490 Run status group 0 (all jobs): 00:11:31.490 READ: bw=30.6MiB/s (32.1MB/s), 104KiB/s-14.6MiB/s (106kB/s-15.3MB/s), io=118MiB (124MB), run=2992-3867msec 00:11:31.490 00:11:31.490 Disk stats (read/write): 00:11:31.490 nvme0n1: ios=12554/0, merge=0/0, ticks=3352/0, in_queue=3352, util=98.88% 
00:11:31.491 nvme0n2: ios=14437/0, merge=0/0, ticks=3521/0, in_queue=3521, util=95.28% 00:11:31.491 nvme0n3: ios=132/0, merge=0/0, ticks=4173/0, in_queue=4173, util=99.91% 00:11:31.491 nvme0n4: ios=2671/0, merge=0/0, ticks=2795/0, in_queue=2795, util=96.75% 00:11:31.749 06:15:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:31.749 06:15:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:32.006 06:15:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:32.006 06:15:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:32.263 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:32.263 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:32.521 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:32.521 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:32.779 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:32.779 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 995393 00:11:32.779 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:32.779 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.037 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.037 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:33.037 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:33.037 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.037 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:33.037 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.037 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:33.037 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:33.037 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:33.037 nvmf hotplug test: fio failed as expected 00:11:33.037 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.297 06:15:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.297 rmmod nvme_tcp 00:11:33.297 rmmod nvme_fabrics 00:11:33.297 rmmod nvme_keyring 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 992852 ']' 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 992852 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 992852 ']' 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 992852 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 992852 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 992852' 00:11:33.297 killing process with pid 992852 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 992852 00:11:33.297 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 992852 00:11:33.556 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.556 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.556 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.556 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:33.556 06:15:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:33.556 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.556 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.556 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.556 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.556 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.556 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.556 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:36.095 00:11:36.095 real 0m24.444s 00:11:36.095 user 1m26.205s 00:11:36.095 sys 0m7.434s 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.095 ************************************ 00:11:36.095 END TEST nvmf_fio_target 00:11:36.095 ************************************ 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:36.095 ************************************ 00:11:36.095 START TEST nvmf_bdevio 00:11:36.095 ************************************ 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:36.095 * Looking for test storage... 
00:11:36.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:36.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.095 --rc genhtml_branch_coverage=1 00:11:36.095 --rc genhtml_function_coverage=1 00:11:36.095 --rc genhtml_legend=1 00:11:36.095 --rc geninfo_all_blocks=1 00:11:36.095 --rc geninfo_unexecuted_blocks=1 00:11:36.095 00:11:36.095 ' 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:36.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.095 --rc genhtml_branch_coverage=1 00:11:36.095 --rc genhtml_function_coverage=1 00:11:36.095 --rc genhtml_legend=1 00:11:36.095 --rc geninfo_all_blocks=1 00:11:36.095 --rc geninfo_unexecuted_blocks=1 00:11:36.095 00:11:36.095 ' 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:36.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.095 --rc genhtml_branch_coverage=1 00:11:36.095 --rc genhtml_function_coverage=1 00:11:36.095 --rc genhtml_legend=1 00:11:36.095 --rc geninfo_all_blocks=1 00:11:36.095 --rc geninfo_unexecuted_blocks=1 00:11:36.095 00:11:36.095 ' 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:36.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.095 --rc genhtml_branch_coverage=1 00:11:36.095 --rc genhtml_function_coverage=1 00:11:36.095 --rc genhtml_legend=1 00:11:36.095 --rc geninfo_all_blocks=1 00:11:36.095 --rc geninfo_unexecuted_blocks=1 00:11:36.095 00:11:36.095 ' 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.095 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.096 06:15:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:38.016 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:38.016 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:38.016 06:15:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:38.016 Found net devices under 0000:84:00.0: cvl_0_0 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:38.016 Found net devices under 0000:84:00.1: cvl_0_1 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:38.016 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.016 
06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:38.017 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:38.017 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:38.017 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:38.017 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:38.017 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:38.017 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:38.017 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:38.017 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:38.017 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:38.017 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:38.017 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:38.017 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:38.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:38.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms
00:11:38.277
00:11:38.277 --- 10.0.0.2 ping statistics ---
00:11:38.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:38.277 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:38.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:38.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms
00:11:38.277
00:11:38.277 --- 10.0.0.1 ping statistics ---
00:11:38.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:38.277 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=998263
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 998263
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 998263 ']'
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:38.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:38.277 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:38.277 [2024-12-08 06:15:28.238244] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
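The nvmf_tgt banner above marks the end of nvmftestinit, and its DPDK output continues below. Before it does, it is worth seeing what the namespace plumbing in this trace amounts to: the target-side port is moved into a private network namespace with 10.0.0.2/24, the initiator port stays in the root namespace with 10.0.0.1/24, an iptables rule admits NVMe/TCP on port 4420, and a ping in each direction proves the link. A condensed sketch of the same sequence; every command appears verbatim in the trace, and only the shell variables (TGT_IF, INI_IF, NS) are added here for readability:

  # Target port moves into a private netns; initiator port stays in the root ns.
  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                  # target side now isolated
  ip addr add 10.0.0.1/24 dev "$INI_IF"              # initiator address, root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1             # target ns -> root ns

This is also why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above: the target itself must run inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), while the test code plays initiator from the root namespace.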
00:11:38.278 [2024-12-08 06:15:28.238342] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.278 [2024-12-08 06:15:28.312900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.278 [2024-12-08 06:15:28.373752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.278 [2024-12-08 06:15:28.373820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.278 [2024-12-08 06:15:28.373850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.278 [2024-12-08 06:15:28.373862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.278 [2024-12-08 06:15:28.373872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.278 [2024-12-08 06:15:28.375622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:38.278 [2024-12-08 06:15:28.375687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:38.278 [2024-12-08 06:15:28.375711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:38.278 [2024-12-08 06:15:28.375715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.537 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.537 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:38.537 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:38.537 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.537 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.537 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.537 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.537 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.537 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.537 [2024-12-08 06:15:28.532048] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.538 Malloc0 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.538 06:15:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.538 [2024-12-08 06:15:28.603424] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:38.538 { 00:11:38.538 "params": { 00:11:38.538 "name": "Nvme$subsystem", 00:11:38.538 "trtype": "$TEST_TRANSPORT", 00:11:38.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:38.538 "adrfam": "ipv4", 00:11:38.538 "trsvcid": "$NVMF_PORT", 00:11:38.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:38.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:38.538 "hdgst": ${hdgst:-false}, 00:11:38.538 "ddgst": ${ddgst:-false} 00:11:38.538 }, 00:11:38.538 "method": "bdev_nvme_attach_controller" 00:11:38.538 } 00:11:38.538 EOF 00:11:38.538 )") 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:38.538 06:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:38.538 "params": { 00:11:38.538 "name": "Nvme1", 00:11:38.538 "trtype": "tcp", 00:11:38.538 "traddr": "10.0.0.2", 00:11:38.538 "adrfam": "ipv4", 00:11:38.538 "trsvcid": "4420", 00:11:38.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:38.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:38.538 "hdgst": false, 00:11:38.538 "ddgst": false 00:11:38.538 }, 00:11:38.538 "method": "bdev_nvme_attach_controller" 00:11:38.538 }' 00:11:38.538 [2024-12-08 06:15:28.652649] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
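The bdevio app prints its own DPDK banner below. The provisioning just traced (bdevio.sh@18 through @22) is plain JSON-RPC: rpc_cmd is a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the same target can be set up by hand. A sketch, assuming the workspace path used in this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192        # transport opts exactly as passed by the harness
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB ram bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

gen_nvmf_target_json then renders the bdev_nvme_attach_controller config printed above, and bdevio consumes it through --json /dev/fd/62 rather than a file on disk.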
00:11:38.538 [2024-12-08 06:15:28.652749] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid998292 ]
00:11:38.796 [2024-12-08 06:15:28.723815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:38.796 [2024-12-08 06:15:28.788221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:38.796 [2024-12-08 06:15:28.788272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:38.796 [2024-12-08 06:15:28.788276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:39.057 I/O targets:
00:11:39.057 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:11:39.057
00:11:39.057
00:11:39.057 CUnit - A unit testing framework for C - Version 2.1-3
00:11:39.057 http://cunit.sourceforge.net/
00:11:39.057
00:11:39.057
00:11:39.057 Suite: bdevio tests on: Nvme1n1
00:11:39.057 Test: blockdev write read block ...passed
00:11:39.057 Test: blockdev write zeroes read block ...passed
00:11:39.057 Test: blockdev write zeroes read no split ...passed
00:11:39.057 Test: blockdev write zeroes read split ...passed
00:11:39.057 Test: blockdev write zeroes read split partial ...passed
00:11:39.057 Test: blockdev reset ...[2024-12-08 06:15:29.176976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:11:39.057 [2024-12-08 06:15:29.177105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b19a70 (9): Bad file descriptor
00:11:39.319 [2024-12-08 06:15:29.275643] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:11:39.319 passed 00:11:39.319 Test: blockdev write read 8 blocks ...passed 00:11:39.319 Test: blockdev write read size > 128k ...passed 00:11:39.319 Test: blockdev write read invalid size ...passed 00:11:39.319 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:39.319 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:39.319 Test: blockdev write read max offset ...passed 00:11:39.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:39.580 Test: blockdev writev readv 8 blocks ...passed 00:11:39.580 Test: blockdev writev readv 30 x 1block ...passed 00:11:39.580 Test: blockdev writev readv block ...passed 00:11:39.580 Test: blockdev writev readv size > 128k ...passed 00:11:39.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:39.580 Test: blockdev comparev and writev ...[2024-12-08 06:15:29.568401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.580 [2024-12-08 06:15:29.568438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:39.580 [2024-12-08 06:15:29.568463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.580 [2024-12-08 06:15:29.568480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:39.580 [2024-12-08 06:15:29.568893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.580 [2024-12-08 06:15:29.568919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:39.580 [2024-12-08 06:15:29.568941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.580 [2024-12-08 06:15:29.568958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:39.580 [2024-12-08 06:15:29.569306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.580 [2024-12-08 06:15:29.569331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:39.580 [2024-12-08 06:15:29.569354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.580 [2024-12-08 06:15:29.569382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:39.580 [2024-12-08 06:15:29.569763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.580 [2024-12-08 06:15:29.569787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:39.580 [2024-12-08 06:15:29.569809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.580 [2024-12-08 06:15:29.569825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:11:39.580 passed
00:11:39.580 Test: blockdev nvme passthru rw ...passed
00:11:39.580 Test: blockdev nvme passthru vendor specific ...[2024-12-08 06:15:29.653156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:39.580 [2024-12-08 06:15:29.653182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:11:39.580 [2024-12-08 06:15:29.653456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:39.580 [2024-12-08 06:15:29.653478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:11:39.580 [2024-12-08 06:15:29.653641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:39.580 [2024-12-08 06:15:29.653664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:11:39.580 [2024-12-08 06:15:29.653832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:39.580 [2024-12-08 06:15:29.653855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:11:39.581 passed
00:11:39.581 Test: blockdev nvme admin passthru ...passed
00:11:39.581 Test: blockdev copy ...passed
00:11:39.581
00:11:39.839 Run Summary: Type Total Ran Passed Failed Inactive
00:11:39.839 suites 1 1 n/a 0 0
00:11:39.839 tests 23 23 23 0 0
00:11:39.839 asserts 152 152 152 0 n/a
00:11:39.839
00:11:39.839 Elapsed time = 1.386 seconds
00:11:39.839 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:39.839 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.839 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:39.839 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.839 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:11:39.839 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:11:39.839 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:39.839 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:11:39.839 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:39.839 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:11:39.839 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:39.839 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:39.839 rmmod nvme_tcp
00:11:39.839 rmmod nvme_fabrics
00:11:39.839 rmmod nvme_keyring
00:11:40.099 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:40.099 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:11:40.099 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
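All 23 CUnit tests pass, and the trace around this point unwinds the setup: the subsystem was deleted over RPC and the kernel initiator modules unloaded just above, while below the target is killed by pid and only the SPDK-tagged iptables rules are filtered back out. A rough manual equivalent, with the caveat that _remove_spdk_ns runs xtrace-disabled in this log, so the namespace deletion shown is its presumed effect rather than a traced command:

  kill 998263 && wait 998263        # wait only reaps a child of the launching shell, as here
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK's tagged rules
  ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1

Tagging each inserted iptables rule with an SPDK_NVMF comment (see the iptables -m comment call earlier) is what makes this save/grep/restore teardown safe: rules that predate the test survive untouched.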
00:11:40.099 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 998263 ']' 00:11:40.099 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 998263 00:11:40.099 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 998263 ']' 00:11:40.099 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 998263 00:11:40.099 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:40.099 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.099 06:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 998263 00:11:40.099 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:40.099 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:40.099 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 998263' 00:11:40.099 killing process with pid 998263 00:11:40.099 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 998263 00:11:40.099 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 998263 00:11:40.359 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:40.359 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:40.359 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:40.359 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:40.360 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:40.360 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:40.360 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:40.360 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.360 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:40.360 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.360 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.360 06:15:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.270 06:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.270 00:11:42.270 real 0m6.663s 00:11:42.270 user 0m10.821s 00:11:42.270 sys 0m2.246s 00:11:42.270 06:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.270 06:15:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.270 ************************************ 00:11:42.270 END TEST nvmf_bdevio 00:11:42.270 ************************************ 00:11:42.270 06:15:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:42.270 00:11:42.270 real 3m57.156s 00:11:42.270 user 10m18.929s 00:11:42.270 sys 1m10.619s 00:11:42.270 
06:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.270 06:15:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:42.270 ************************************ 00:11:42.270 END TEST nvmf_target_core 00:11:42.270 ************************************ 00:11:42.270 06:15:32 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:42.270 06:15:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.270 06:15:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.270 06:15:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:42.530 ************************************ 00:11:42.530 START TEST nvmf_target_extra 00:11:42.530 ************************************ 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:42.530 * Looking for test storage... 00:11:42.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.530 --rc genhtml_branch_coverage=1 00:11:42.530 --rc genhtml_function_coverage=1 00:11:42.530 --rc genhtml_legend=1 00:11:42.530 --rc geninfo_all_blocks=1 00:11:42.530 --rc geninfo_unexecuted_blocks=1 00:11:42.530 00:11:42.530 ' 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.530 --rc genhtml_branch_coverage=1 00:11:42.530 --rc genhtml_function_coverage=1 00:11:42.530 --rc genhtml_legend=1 00:11:42.530 --rc geninfo_all_blocks=1 00:11:42.530 --rc geninfo_unexecuted_blocks=1 00:11:42.530 00:11:42.530 ' 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.530 --rc genhtml_branch_coverage=1 00:11:42.530 --rc genhtml_function_coverage=1 00:11:42.530 --rc genhtml_legend=1 00:11:42.530 --rc geninfo_all_blocks=1 00:11:42.530 --rc geninfo_unexecuted_blocks=1 00:11:42.530 00:11:42.530 ' 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.530 --rc genhtml_branch_coverage=1 00:11:42.530 --rc genhtml_function_coverage=1 00:11:42.530 --rc genhtml_legend=1 00:11:42.530 --rc geninfo_all_blocks=1 00:11:42.530 --rc geninfo_unexecuted_blocks=1 00:11:42.530 00:11:42.530 ' 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
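The decimal/ver1[v]/ver2[v] entries traced just above are scripts/common.sh deciding whether the installed lcov predates version 2 (lt 1.15 2); the same check recurs for each test below. Condensed, the comparison splits each version string on '.', '-' and ':' and walks the numeric fields left to right. A simplified sketch of that logic (the helper name lt matches the harness; the body is condensed, and missing fields compare as 0):

  lt() {
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # 1 < 2, so lt 1.15 2 is true
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
      done
      return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo 'lcov predates 2.x'   # the branch this run takes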
00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.530 06:15:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.531 ************************************ 00:11:42.531 START TEST nvmf_example 00:11:42.531 ************************************ 00:11:42.531 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:42.791 * Looking for test storage... 
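The storage probe continues on the next line. The START TEST banner just printed comes from the run_test wrapper that drives every script in this log and accounts for the real/user/sys triplets after each END TEST banner. A stripped-down sketch of that pattern; the real wrapper in autotest_common.sh also manages xtrace scoping and failure bookkeeping:

  run_test() {
      local name=$1
      shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"        # the test body, e.g. nvmf_example.sh --transport=tcp
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }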
00:11:42.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.791 --rc genhtml_branch_coverage=1 00:11:42.791 --rc genhtml_function_coverage=1 00:11:42.791 --rc genhtml_legend=1 00:11:42.791 --rc geninfo_all_blocks=1 00:11:42.791 --rc geninfo_unexecuted_blocks=1 00:11:42.791 00:11:42.791 ' 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.791 --rc genhtml_branch_coverage=1 00:11:42.791 --rc genhtml_function_coverage=1 00:11:42.791 --rc genhtml_legend=1 00:11:42.791 --rc geninfo_all_blocks=1 00:11:42.791 --rc geninfo_unexecuted_blocks=1 00:11:42.791 00:11:42.791 ' 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.791 --rc genhtml_branch_coverage=1 00:11:42.791 --rc genhtml_function_coverage=1 00:11:42.791 --rc genhtml_legend=1 00:11:42.791 --rc geninfo_all_blocks=1 00:11:42.791 --rc geninfo_unexecuted_blocks=1 00:11:42.791 00:11:42.791 ' 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.791 --rc genhtml_branch_coverage=1 00:11:42.791 --rc genhtml_function_coverage=1 00:11:42.791 --rc genhtml_legend=1 00:11:42.791 --rc geninfo_all_blocks=1 00:11:42.791 --rc geninfo_unexecuted_blocks=1 00:11:42.791 00:11:42.791 ' 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:42.791 06:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.791 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:42.792 06:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.792 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:45.394 06:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:45.394 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:45.394 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.394 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:45.395 Found net devices under 0000:84:00.0: cvl_0_0 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:45.395 Found net devices under 0000:84:00.1: cvl_0_1 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.395 06:15:34 
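For reference, the nvmf/common.sh trace above builds its device list by matching PCI vendor:device pairs (Intel 0x8086 with E810 IDs 0x1592 and 0x159b in this run) and then resolving each matched PCI function to its kernel net device through sysfs. A standalone sketch of that discovery idiom, assuming a Linux sysfs layout and using only the IDs quoted in this log:

#!/usr/bin/env bash
# Sketch: list net devices backed by selected PCI IDs, as the trace above does.
intel=0x8086
wanted=(0x1592 0x159b)                  # E810 device IDs seen in this log
net_devs=()
for pci in /sys/bus/pci/devices/*; do
  [[ $(<"$pci/vendor") == "$intel" ]] || continue
  for id in "${wanted[@]}"; do
    [[ $(<"$pci/device") == "$id" ]] || continue
    for dev in "$pci"/net/*; do         # each function exposes its netdev here
      [[ -e $dev ]] && net_devs+=("${dev##*/}")
    done
  done
done
(( ${#net_devs[@]} )) && printf 'Found net device: %s\n' "${net_devs[@]}"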
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:45.395 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:45.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:45.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:11:45.395 00:11:45.395 --- 10.0.0.2 ping statistics --- 00:11:45.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.395 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:45.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:45.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:11:45.395 00:11:45.395 --- 10.0.0.1 ping statistics --- 00:11:45.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.395 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1000574 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1000574 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1000574 ']' 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.395 06:15:35 
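The nvmf_tcp_init steps traced above, namespace creation, moving cvl_0_0 into it, addressing, the iptables ACCEPT rule for port 4420, and the two verification pings, amount to the following standalone sequence. This is a sketch assuming root and the interface names from this log, not a verbatim extract of nvmf/common.sh:

#!/usr/bin/env bash
set -e
# Target side lives in its own namespace; the initiator stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic, tagged so the teardown path can strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify both directions before starting the target, as the log does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The teardown traced further down (remove_spdk_ns, 'ip -4 addr flush', and the iptables-save | grep -v SPDK_NVMF | iptables-restore round trip) undoes exactly these steps.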
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.395 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.330 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.330 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:46.330 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:46.331 06:15:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:56.315 Initializing NVMe Controllers 00:11:56.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:56.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:56.315 Initialization complete. Launching workers. 00:11:56.315 ======================================================== 00:11:56.315 Latency(us) 00:11:56.315 Device Information : IOPS MiB/s Average min max 00:11:56.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14960.67 58.44 4277.11 789.73 15385.36 00:11:56.315 ======================================================== 00:11:56.315 Total : 14960.67 58.44 4277.11 789.73 15385.36 00:11:56.315 00:11:56.315 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:56.315 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:56.315 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:56.315 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:56.315 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.315 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:56.315 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.315 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.315 rmmod nvme_tcp 00:11:56.315 rmmod nvme_fabrics 00:11:56.315 rmmod nvme_keyring 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1000574 ']' 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1000574 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1000574 ']' 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1000574 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1000574 00:11:56.574 06:15:46 
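Condensed, the example test above configures the target over JSON-RPC and then drives it with spdk_nvme_perf. The sketch below replays the same call sequence with scripts/rpc.py from an SPDK checkout; the rpc_cmd wrapper in the log talks to the same /var/tmp/spdk.sock socket, but this exact script does not appear in the log.

#!/usr/bin/env bash
set -e
rpc=./scripts/rpc.py   # assumption: run from the SPDK source root
# Mirror the rpc_cmd calls from the trace above.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512        # 64 MiB bdev, 512 B blocks -> "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Same workload as the log: QD 64, 4 KiB random mixed I/O (-M 30), 10 seconds.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'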
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1000574' 00:11:56.574 killing process with pid 1000574 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1000574 00:11:56.574 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1000574 00:11:56.834 nvmf threads initialize successfully 00:11:56.834 bdev subsystem init successfully 00:11:56.834 created a nvmf target service 00:11:56.834 create targets's poll groups done 00:11:56.834 all subsystems of target started 00:11:56.834 nvmf target is running 00:11:56.834 all subsystems of target stopped 00:11:56.834 destroy targets's poll groups done 00:11:56.834 destroyed the nvmf target service 00:11:56.834 bdev subsystem finish successfully 00:11:56.834 nvmf threads destroy successfully 00:11:56.834 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:56.834 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:56.834 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:56.834 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:56.834 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:56.834 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:56.834 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:56.834 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.834 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:56.834 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.834 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.834 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.742 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:58.742 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:58.742 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:58.742 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:58.742 00:11:58.742 real 0m16.219s 00:11:58.742 user 0m45.259s 00:11:58.742 sys 0m3.659s 00:11:58.742 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.742 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:58.742 ************************************ 00:11:58.742 END TEST nvmf_example 00:11:58.742 ************************************ 00:11:58.742 06:15:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:58.742 06:15:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:58.742 06:15:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.742 06:15:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.003 ************************************ 00:11:59.003 START TEST nvmf_filesystem 00:11:59.003 ************************************ 00:11:59.003 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:59.003 * Looking for test storage... 00:11:59.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.003 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:59.003 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:59.003 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.003 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:59.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.003 --rc genhtml_branch_coverage=1 00:11:59.004 --rc genhtml_function_coverage=1 00:11:59.004 --rc genhtml_legend=1 00:11:59.004 --rc geninfo_all_blocks=1 00:11:59.004 --rc geninfo_unexecuted_blocks=1 00:11:59.004 00:11:59.004 ' 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:59.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.004 --rc genhtml_branch_coverage=1 00:11:59.004 --rc genhtml_function_coverage=1 00:11:59.004 --rc genhtml_legend=1 00:11:59.004 --rc geninfo_all_blocks=1 00:11:59.004 --rc geninfo_unexecuted_blocks=1 00:11:59.004 00:11:59.004 ' 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:59.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.004 --rc genhtml_branch_coverage=1 00:11:59.004 --rc genhtml_function_coverage=1 00:11:59.004 --rc genhtml_legend=1 00:11:59.004 --rc geninfo_all_blocks=1 00:11:59.004 --rc geninfo_unexecuted_blocks=1 00:11:59.004 00:11:59.004 ' 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:59.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.004 --rc genhtml_branch_coverage=1 00:11:59.004 --rc genhtml_function_coverage=1 00:11:59.004 --rc genhtml_legend=1 00:11:59.004 --rc geninfo_all_blocks=1 00:11:59.004 --rc geninfo_unexecuted_blocks=1 00:11:59.004 00:11:59.004 ' 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:59.004 06:15:49 
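The 'lt 1.15 2' trace above is scripts/common.sh comparing the detected lcov version: both strings are split on dots, dashes, and colons into arrays, then compared field by field as integers. A minimal re-implementation of the idiom, offered as an illustration rather than the exact upstream function; missing fields compare as 0 here, which suffices for the check traced in this log:

#!/usr/bin/env bash
# Sketch: succeed (return 0) when version $1 sorts strictly before $2.
version_lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( i = 0; i < max; i++ )); do
    local a=${ver1[i]:-0} b=${ver2[i]:-0}
    (( a > b )) && return 1
    (( a < b )) && return 0
  done
  return 1                      # equal versions are not "less than"
}
version_lt 1.15 2 && echo '1.15 < 2'   # matches the lcov check in this log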
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:59.004 
06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:59.004 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:59.005 #define SPDK_CONFIG_H 00:11:59.005 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:59.005 #define SPDK_CONFIG_APPS 1 00:11:59.005 #define SPDK_CONFIG_ARCH native 00:11:59.005 #undef SPDK_CONFIG_ASAN 00:11:59.005 #undef SPDK_CONFIG_AVAHI 00:11:59.005 #undef SPDK_CONFIG_CET 00:11:59.005 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:59.005 #define SPDK_CONFIG_COVERAGE 1 00:11:59.005 #define SPDK_CONFIG_CROSS_PREFIX 00:11:59.005 #undef SPDK_CONFIG_CRYPTO 00:11:59.005 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:59.005 #undef SPDK_CONFIG_CUSTOMOCF 00:11:59.005 #undef SPDK_CONFIG_DAOS 00:11:59.005 #define SPDK_CONFIG_DAOS_DIR 00:11:59.005 #define SPDK_CONFIG_DEBUG 1 00:11:59.005 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:59.005 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:59.005 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:59.005 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:59.005 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:59.005 #undef SPDK_CONFIG_DPDK_UADK 00:11:59.005 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:59.005 #define SPDK_CONFIG_EXAMPLES 1 00:11:59.005 #undef SPDK_CONFIG_FC 00:11:59.005 #define SPDK_CONFIG_FC_PATH 00:11:59.005 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:59.005 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:59.005 #define SPDK_CONFIG_FSDEV 1 00:11:59.005 #undef SPDK_CONFIG_FUSE 00:11:59.005 #undef SPDK_CONFIG_FUZZER 00:11:59.005 #define SPDK_CONFIG_FUZZER_LIB 00:11:59.005 #undef SPDK_CONFIG_GOLANG 00:11:59.005 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:59.005 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:59.005 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:59.005 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:59.005 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:59.005 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:59.005 #undef SPDK_CONFIG_HAVE_LZ4 00:11:59.005 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:59.005 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:59.005 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:59.005 #define SPDK_CONFIG_IDXD 1 00:11:59.005 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:59.005 #undef SPDK_CONFIG_IPSEC_MB 00:11:59.005 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:59.005 #define SPDK_CONFIG_ISAL 1 00:11:59.005 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:59.005 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:59.005 #define SPDK_CONFIG_LIBDIR 00:11:59.005 #undef SPDK_CONFIG_LTO 00:11:59.005 #define SPDK_CONFIG_MAX_LCORES 128 00:11:59.005 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:59.005 #define SPDK_CONFIG_NVME_CUSE 1 00:11:59.005 #undef SPDK_CONFIG_OCF 00:11:59.005 #define SPDK_CONFIG_OCF_PATH 00:11:59.005 #define SPDK_CONFIG_OPENSSL_PATH 00:11:59.005 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:59.005 #define SPDK_CONFIG_PGO_DIR 00:11:59.005 #undef SPDK_CONFIG_PGO_USE 00:11:59.005 #define SPDK_CONFIG_PREFIX /usr/local 00:11:59.005 #undef SPDK_CONFIG_RAID5F 00:11:59.005 #undef SPDK_CONFIG_RBD 00:11:59.005 #define SPDK_CONFIG_RDMA 1 00:11:59.005 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:59.005 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:59.005 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:59.005 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:59.005 #define SPDK_CONFIG_SHARED 1 00:11:59.005 #undef SPDK_CONFIG_SMA 00:11:59.005 #define SPDK_CONFIG_TESTS 1 00:11:59.005 #undef SPDK_CONFIG_TSAN 
00:11:59.005 #define SPDK_CONFIG_UBLK 1 00:11:59.005 #define SPDK_CONFIG_UBSAN 1 00:11:59.005 #undef SPDK_CONFIG_UNIT_TESTS 00:11:59.005 #undef SPDK_CONFIG_URING 00:11:59.005 #define SPDK_CONFIG_URING_PATH 00:11:59.005 #undef SPDK_CONFIG_URING_ZNS 00:11:59.005 #undef SPDK_CONFIG_USDT 00:11:59.005 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:59.005 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:59.005 #define SPDK_CONFIG_VFIO_USER 1 00:11:59.005 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:59.005 #define SPDK_CONFIG_VHOST 1 00:11:59.005 #define SPDK_CONFIG_VIRTIO 1 00:11:59.005 #undef SPDK_CONFIG_VTUNE 00:11:59.005 #define SPDK_CONFIG_VTUNE_DIR 00:11:59.005 #define SPDK_CONFIG_WERROR 1 00:11:59.005 #define SPDK_CONFIG_WPDK_DIR 00:11:59.005 #undef SPDK_CONFIG_XNVME 00:11:59.005 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:59.005 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:59.006 06:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:59.006 06:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:59.006 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:59.007 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1002271 ]] 00:11:59.008 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1002271 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.kQZ2f5 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.kQZ2f5/tests/target /tmp/spdk.kQZ2f5 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:59.268 06:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39220273152 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=45077106688 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5856833536 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=22528520192 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=22538551296 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=8993034240 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9015422976 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22388736 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=22538096640 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=22538555392 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=458752 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:59.268 06:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4507697152 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4507709440 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:59.268 * Looking for test storage... 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=39220273152 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8071426048 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.268 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:59.269 06:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:59.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.269 --rc genhtml_branch_coverage=1 00:11:59.269 --rc genhtml_function_coverage=1 00:11:59.269 --rc genhtml_legend=1 00:11:59.269 --rc geninfo_all_blocks=1 00:11:59.269 --rc geninfo_unexecuted_blocks=1 00:11:59.269 00:11:59.269 ' 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:59.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.269 --rc genhtml_branch_coverage=1 00:11:59.269 --rc genhtml_function_coverage=1 00:11:59.269 --rc genhtml_legend=1 00:11:59.269 --rc geninfo_all_blocks=1 00:11:59.269 --rc geninfo_unexecuted_blocks=1 00:11:59.269 00:11:59.269 ' 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:59.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.269 --rc genhtml_branch_coverage=1 00:11:59.269 --rc genhtml_function_coverage=1 00:11:59.269 --rc genhtml_legend=1 00:11:59.269 --rc geninfo_all_blocks=1 00:11:59.269 --rc geninfo_unexecuted_blocks=1 00:11:59.269 00:11:59.269 ' 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:59.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.269 --rc genhtml_branch_coverage=1 00:11:59.269 --rc genhtml_function_coverage=1 00:11:59.269 --rc genhtml_legend=1 00:11:59.269 --rc geninfo_all_blocks=1 00:11:59.269 --rc geninfo_unexecuted_blocks=1 00:11:59.269 00:11:59.269 ' 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.269 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.270 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:01.912 
06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:01.912 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:01.912 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:01.912 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:01.913 Found net devices under 0000:84:00.0: cvl_0_0 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:01.913 Found net devices under 
0000:84:00.1: cvl_0_1 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:01.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:12:01.913 00:12:01.913 --- 10.0.0.2 ping statistics --- 00:12:01.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.913 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:12:01.913 00:12:01.913 --- 10.0.0.1 ping statistics --- 00:12:01.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.913 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.913 ************************************ 00:12:01.913 START TEST nvmf_filesystem_no_in_capsule 00:12:01.913 ************************************ 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
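
At this point the harness has finished building its back-to-back NVMe/TCP test bed: the two ice ports (0000:84:00.0/1) expose the netdevs cvl_0_0 and cvl_0_1, the target-side device is moved into the cvl_0_0_ns_spdk network namespace, both ends get addresses on 10.0.0.0/24, the firewall admits port 4420, connectivity is ping-verified in both directions, and nvme-tcp is loaded. A condensed sketch of that setup, assuming the same device and namespace names the trace uses (run as root):

    # Back-to-back test bed: target NIC in a netns, initiator NIC in the root ns.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    modprobe nvme-tcp

Putting the target port in its own namespace keeps both directions of the TCP connection on real NIC queues instead of collapsing onto loopback (both addresses would otherwise be local to one stack), which is why every target-side command in the trace runs through "ip netns exec cvl_0_0_ns_spdk".
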
00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1004049 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1004049 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1004049 ']' 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.913 [2024-12-08 06:15:51.659927] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:12:01.913 [2024-12-08 06:15:51.660028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.913 [2024-12-08 06:15:51.735838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.913 [2024-12-08 06:15:51.794864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.913 [2024-12-08 06:15:51.794938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.913 [2024-12-08 06:15:51.794967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.913 [2024-12-08 06:15:51.794979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.913 [2024-12-08 06:15:51.794989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
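
The nvmf_tgt launch above runs under "ip netns exec", so the listener it later creates binds inside the namespace, while its UNIX-domain RPC socket stays reachable from the root namespace (netns does not isolate the filesystem). waitforlisten then blocks until that socket answers before any rpc_cmd is issued. A simplified sketch of the launch-and-wait step, assuming the workspace path from the trace and SPDK's stock scripts/rpc.py client (the real waitforlisten helper does more bookkeeping than this loop):

    # Start the target in the namespace: -i 0 shm id, -e 0xFFFF tracepoint mask,
    # -m 0xF runs reactors on cores 0-3.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten, reduced to its core: poll /var/tmp/spdk.sock until it answers.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done
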
00:12:01.913 [2024-12-08 06:15:51.796744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.913 [2024-12-08 06:15:51.796810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.913 [2024-12-08 06:15:51.796873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.913 [2024-12-08 06:15:51.796877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:01.913 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.914 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.914 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.914 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:01.914 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:01.914 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.914 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.914 [2024-12-08 06:15:51.956817] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.914 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.914 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:01.914 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.914 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.174 Malloc1 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.174 06:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.174 [2024-12-08 06:15:52.162468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.174 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:02.174 { 00:12:02.174 "name": "Malloc1", 00:12:02.174 "aliases": [ 00:12:02.174 "65fb044f-5144-4e91-a23d-6f8dffdeaac2" 00:12:02.174 ], 00:12:02.174 "product_name": "Malloc disk", 00:12:02.174 "block_size": 512, 00:12:02.174 "num_blocks": 1048576, 00:12:02.174 "uuid": "65fb044f-5144-4e91-a23d-6f8dffdeaac2", 00:12:02.174 "assigned_rate_limits": { 00:12:02.174 "rw_ios_per_sec": 0, 00:12:02.174 "rw_mbytes_per_sec": 0, 00:12:02.174 "r_mbytes_per_sec": 0, 00:12:02.174 "w_mbytes_per_sec": 0 00:12:02.174 }, 00:12:02.174 "claimed": true, 00:12:02.174 "claim_type": "exclusive_write", 00:12:02.174 "zoned": false, 00:12:02.174 "supported_io_types": { 00:12:02.174 "read": 
true, 00:12:02.174 "write": true, 00:12:02.174 "unmap": true, 00:12:02.174 "flush": true, 00:12:02.174 "reset": true, 00:12:02.174 "nvme_admin": false, 00:12:02.174 "nvme_io": false, 00:12:02.174 "nvme_io_md": false, 00:12:02.174 "write_zeroes": true, 00:12:02.174 "zcopy": true, 00:12:02.174 "get_zone_info": false, 00:12:02.174 "zone_management": false, 00:12:02.174 "zone_append": false, 00:12:02.174 "compare": false, 00:12:02.174 "compare_and_write": false, 00:12:02.174 "abort": true, 00:12:02.175 "seek_hole": false, 00:12:02.175 "seek_data": false, 00:12:02.175 "copy": true, 00:12:02.175 "nvme_iov_md": false 00:12:02.175 }, 00:12:02.175 "memory_domains": [ 00:12:02.175 { 00:12:02.175 "dma_device_id": "system", 00:12:02.175 "dma_device_type": 1 00:12:02.175 }, 00:12:02.175 { 00:12:02.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.175 "dma_device_type": 2 00:12:02.175 } 00:12:02.175 ], 00:12:02.175 "driver_specific": {} 00:12:02.175 } 00:12:02.175 ]' 00:12:02.175 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:02.175 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:02.175 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:02.175 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:02.175 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:02.175 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:02.175 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:02.175 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.113 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.113 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:03.113 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.113 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:03.113 06:15:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:05.017 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:05.017 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:05.017 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:05.018 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:05.018 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:05.951 06:15:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.885 ************************************ 00:12:06.885 START TEST filesystem_ext4 00:12:06.885 ************************************ 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:06.885 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:06.885 mke2fs 1.47.0 (5-Feb-2023) 00:12:07.144 Discarding device blocks: 0/522240 done 00:12:07.144 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:07.144 Filesystem UUID: 4f0f42fe-0b30-4a78-a639-11d1fc9f498d 00:12:07.144 Superblock backups stored on blocks: 00:12:07.144 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:07.144 00:12:07.144 Allocating group tables: 0/64 done 00:12:07.144 Writing inode tables: 0/64 done 00:12:07.144 Creating journal (8192 blocks): done 00:12:07.144 Writing superblocks and filesystem accounting information: 0/64 done 00:12:07.144 00:12:07.144 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:07.144 06:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:13.711 
06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1004049 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:13.711 00:12:13.711 real 0m6.265s 00:12:13.711 user 0m0.018s 00:12:13.711 sys 0m0.055s 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:13.711 ************************************ 00:12:13.711 END TEST filesystem_ext4 00:12:13.711 ************************************ 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:13.711 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.712 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.712 ************************************ 00:12:13.712 START TEST filesystem_btrfs 00:12:13.712 ************************************ 00:12:13.712 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:13.712 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:13.712 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:13.712 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:13.712 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:13.712 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:13.712 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:13.712 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:13.712 06:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:13.712 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:13.712 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:13.712 btrfs-progs v6.8.1 00:12:13.712 See https://btrfs.readthedocs.io for more information. 00:12:13.712 00:12:13.712 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:13.712 NOTE: several default settings have changed in version 5.15, please make sure 00:12:13.712 this does not affect your deployments: 00:12:13.712 - DUP for metadata (-m dup) 00:12:13.712 - enabled no-holes (-O no-holes) 00:12:13.712 - enabled free-space-tree (-R free-space-tree) 00:12:13.712 00:12:13.712 Label: (null) 00:12:13.712 UUID: 09af9bc9-8056-4a61-a76f-8ba9e8f3176c 00:12:13.712 Node size: 16384 00:12:13.712 Sector size: 4096 (CPU page size: 4096) 00:12:13.712 Filesystem size: 510.00MiB 00:12:13.712 Block group profiles: 00:12:13.712 Data: single 8.00MiB 00:12:13.712 Metadata: DUP 32.00MiB 00:12:13.712 System: DUP 8.00MiB 00:12:13.712 SSD detected: yes 00:12:13.712 Zoned device: no 00:12:13.712 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:13.712 Checksum: crc32c 00:12:13.712 Number of devices: 1 00:12:13.712 Devices: 00:12:13.712 ID SIZE PATH 00:12:13.712 1 510.00MiB /dev/nvme0n1p1 00:12:13.712 00:12:13.712 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:13.712 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1004049 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:14.278 
06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:14.278 00:12:14.278 real 0m1.117s 00:12:14.278 user 0m0.025s 00:12:14.278 sys 0m0.093s 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:14.278 ************************************ 00:12:14.278 END TEST filesystem_btrfs 00:12:14.278 ************************************ 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.278 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.538 ************************************ 00:12:14.538 START TEST filesystem_xfs 00:12:14.538 ************************************ 00:12:14.538 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:14.538 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:14.538 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:14.538 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:14.538 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:14.538 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:14.538 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:14.538 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:14.538 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:14.538 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:14.538 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:14.538 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:14.538 = sectsz=512 attr=2, projid32bit=1 00:12:14.538 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:14.538 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:14.538 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:14.538 = sunit=0 swidth=0 blks 00:12:14.538 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:14.538 log =internal log bsize=4096 blocks=16384, version=2 00:12:14.538 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:14.538 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:15.474 Discarding blocks...Done. 00:12:15.474 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:15.474 06:16:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1004049 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.375 00:12:17.375 real 0m2.722s 00:12:17.375 user 0m0.018s 00:12:17.375 sys 0m0.051s 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:17.375 ************************************ 00:12:17.375 END TEST filesystem_xfs 00:12:17.375 ************************************ 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.375 06:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:17.375 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1004049 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1004049 ']' 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1004049 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1004049 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1004049' 00:12:17.634 killing process with pid 1004049 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1004049 00:12:17.634 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1004049 00:12:17.894 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:17.894 00:12:17.894 real 0m16.392s 00:12:17.894 user 1m3.453s 00:12:17.894 sys 0m2.031s 00:12:17.894 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.894 06:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.894 ************************************ 00:12:17.894 END TEST nvmf_filesystem_no_in_capsule 00:12:17.894 ************************************ 00:12:18.152 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:18.152 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.152 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.152 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.152 ************************************ 00:12:18.152 START TEST nvmf_filesystem_in_capsule 00:12:18.152 ************************************ 00:12:18.152 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:18.152 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:18.152 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:18.152 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.152 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.152 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.152 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1006154 00:12:18.153 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.153 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1006154 00:12:18.153 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1006154 ']' 00:12:18.153 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.153 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.153 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
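
The second half repeats the same flow with one functional change: nvmf_filesystem_part receives 4096 instead of 0, so the transport is created with a 4096-byte in-capsule data size and small writes may ride inside the NVMe/TCP command capsule instead of being fetched in a separate data transfer. The difference is confined to a single RPC (both commands appear verbatim in the trace):

    # no_in_capsule half (traced earlier):
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # in_capsule half (traced below): allow up to 4096 bytes of in-capsule data.
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
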
00:12:18.153 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.153 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.153 [2024-12-08 06:16:08.108981] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:12:18.153 [2024-12-08 06:16:08.109107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.153 [2024-12-08 06:16:08.184420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.153 [2024-12-08 06:16:08.243416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.153 [2024-12-08 06:16:08.243474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.153 [2024-12-08 06:16:08.243503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.153 [2024-12-08 06:16:08.243514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.153 [2024-12-08 06:16:08.243523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.153 [2024-12-08 06:16:08.245103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.153 [2024-12-08 06:16:08.245162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.153 [2024-12-08 06:16:08.245230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.153 [2024-12-08 06:16:08.245233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.413 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.413 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:18.413 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:18.413 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:18.413 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.413 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.413 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:18.413 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:18.413 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.413 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.413 [2024-12-08 06:16:08.386306] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.413 06:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.413 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:18.413 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.413 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.673 Malloc1 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.673 [2024-12-08 06:16:08.564467] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:18.673 06:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.673 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:18.673 { 00:12:18.673 "name": "Malloc1", 00:12:18.673 "aliases": [ 00:12:18.673 "21283a05-4d53-4367-bb5e-e2d22e71b090" 00:12:18.673 ], 00:12:18.673 "product_name": "Malloc disk", 00:12:18.673 "block_size": 512, 00:12:18.673 "num_blocks": 1048576, 00:12:18.673 "uuid": "21283a05-4d53-4367-bb5e-e2d22e71b090", 00:12:18.673 "assigned_rate_limits": { 00:12:18.673 "rw_ios_per_sec": 0, 00:12:18.673 "rw_mbytes_per_sec": 0, 00:12:18.673 "r_mbytes_per_sec": 0, 00:12:18.673 "w_mbytes_per_sec": 0 00:12:18.673 }, 00:12:18.673 "claimed": true, 00:12:18.673 "claim_type": "exclusive_write", 00:12:18.673 "zoned": false, 00:12:18.673 "supported_io_types": { 00:12:18.673 "read": true, 00:12:18.673 "write": true, 00:12:18.673 "unmap": true, 00:12:18.673 "flush": true, 00:12:18.673 "reset": true, 00:12:18.673 "nvme_admin": false, 00:12:18.673 "nvme_io": false, 00:12:18.673 "nvme_io_md": false, 00:12:18.673 "write_zeroes": true, 00:12:18.673 "zcopy": true, 00:12:18.673 "get_zone_info": false, 00:12:18.673 "zone_management": false, 00:12:18.673 "zone_append": false, 00:12:18.673 "compare": false, 00:12:18.673 "compare_and_write": false, 00:12:18.673 "abort": true, 00:12:18.673 "seek_hole": false, 00:12:18.673 "seek_data": false, 00:12:18.673 "copy": true, 00:12:18.673 "nvme_iov_md": false 00:12:18.673 }, 00:12:18.673 "memory_domains": [ 00:12:18.673 { 00:12:18.673 "dma_device_id": "system", 00:12:18.673 "dma_device_type": 1 00:12:18.673 }, 00:12:18.673 { 00:12:18.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.673 "dma_device_type": 2 00:12:18.673 } 00:12:18.673 ], 00:12:18.673 "driver_specific": {} 00:12:18.673 } 00:12:18.673 ]' 00:12:18.674 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:18.674 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:18.674 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:18.674 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:18.674 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:18.674 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:18.674 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:18.674 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.609 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.609 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:19.609 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.609 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:19.609 06:16:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:21.517 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:21.777 06:16:11 
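The host side then attaches with the kernel initiator and partitions the namespace. The serial SPDKISFASTANDAWESOME set on the subsystem is what waitforserial polls for in lsblk output, and the script cross-checks the block device size (536870912 bytes) against the malloc size before touching it. A simplified sketch; the log's retry loop caps at 15 attempts of 2 s, and error handling is elided here:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"   # identity from nvme gen-hostnqn
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
        sleep 2                                  # the namespace can take a moment to appear
    done
    dev=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe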
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:22.035 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:22.987 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:22.987 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:22.987 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:22.987 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.987 06:16:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.987 ************************************ 00:12:22.987 START TEST filesystem_in_capsule_ext4 00:12:22.987 ************************************ 00:12:22.987 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:22.987 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:22.987 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:22.987 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:22.987 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:22.987 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:22.987 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:22.987 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:22.987 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:22.987 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:22.987 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:22.987 mke2fs 1.47.0 (5-Feb-2023) 00:12:23.247 Discarding device blocks: 0/522240 done 00:12:23.247 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:23.247 Filesystem UUID: cdfaa86c-eb11-4900-bb1b-2a07cd939cbb 00:12:23.247 Superblock backups stored on blocks: 00:12:23.247 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:23.247 00:12:23.247 Allocating group tables: 0/64 done 00:12:23.247 Writing inode tables: 
0/64 done 00:12:23.247 Creating journal (8192 blocks): done 00:12:23.247 Writing superblocks and filesystem accounting information: 0/64 done 00:12:23.247 00:12:23.247 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:23.247 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1006154 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:29.807 00:12:29.807 real 0m5.781s 00:12:29.807 user 0m0.016s 00:12:29.807 sys 0m0.058s 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:29.807 ************************************ 00:12:29.807 END TEST filesystem_in_capsule_ext4 00:12:29.807 ************************************ 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.807 
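Each filesystem case (ext4 above, then btrfs and xfs below) runs the same nvmf_filesystem_create template: format the partition, mount it, create and delete a file with syncs in between, unmount, and verify with kill -0 that the target process (pid 1006154) survived the I/O. A condensed sketch of that template, assuming the device and mountpoint names from this log:

    nvmf_filesystem_create() {
        local fstype=$1 part=/dev/nvme0n1p1
        local force=-f
        [ "$fstype" = ext4 ] && force=-F        # ext4's mkfs spells force differently
        mkfs."$fstype" "$force" "$part"
        mount "$part" /mnt/device
        touch /mnt/device/aaa; sync
        rm /mnt/device/aaa; sync
        umount /mnt/device
        kill -0 "$nvmfpid"                      # target must still be alive after the I/O
    }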
************************************ 00:12:29.807 START TEST filesystem_in_capsule_btrfs 00:12:29.807 ************************************ 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:29.807 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:29.807 btrfs-progs v6.8.1 00:12:29.807 See https://btrfs.readthedocs.io for more information. 00:12:29.807 00:12:29.807 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:29.807 NOTE: several default settings have changed in version 5.15, please make sure 00:12:29.807 this does not affect your deployments: 00:12:29.807 - DUP for metadata (-m dup) 00:12:29.807 - enabled no-holes (-O no-holes) 00:12:29.807 - enabled free-space-tree (-R free-space-tree) 00:12:29.807 00:12:29.807 Label: (null) 00:12:29.807 UUID: 8d5eb4e0-1922-42ae-937b-701f2b2cec20 00:12:29.807 Node size: 16384 00:12:29.807 Sector size: 4096 (CPU page size: 4096) 00:12:29.807 Filesystem size: 510.00MiB 00:12:29.807 Block group profiles: 00:12:29.807 Data: single 8.00MiB 00:12:29.807 Metadata: DUP 32.00MiB 00:12:29.807 System: DUP 8.00MiB 00:12:29.807 SSD detected: yes 00:12:29.807 Zoned device: no 00:12:29.807 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:29.807 Checksum: crc32c 00:12:29.807 Number of devices: 1 00:12:29.807 Devices: 00:12:29.807 ID SIZE PATH 00:12:29.807 1 510.00MiB /dev/nvme0n1p1 00:12:29.807 00:12:29.807 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:29.807 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:29.807 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:29.807 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:29.807 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:29.807 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:29.807 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:29.807 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:29.807 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1006154 00:12:29.807 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:29.808 00:12:29.808 real 0m0.486s 00:12:29.808 user 0m0.019s 00:12:29.808 sys 0m0.099s 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:29.808 ************************************ 00:12:29.808 END TEST filesystem_in_capsule_btrfs 00:12:29.808 ************************************ 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.808 ************************************ 00:12:29.808 START TEST filesystem_in_capsule_xfs 00:12:29.808 ************************************ 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:29.808 06:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:29.808 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:29.808 = sectsz=512 attr=2, projid32bit=1 00:12:29.808 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:29.808 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:29.808 data = bsize=4096 blocks=130560, imaxpct=25 00:12:29.808 = sunit=0 swidth=0 blks 00:12:29.808 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:29.808 log =internal log bsize=4096 blocks=16384, version=2 00:12:29.808 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:29.808 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:30.374 Discarding blocks...Done. 
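All three mkfs runs agree on the usable size, which is a useful sanity check on the partitioning: the 512 MiB namespace loses roughly 2 MiB to GPT alignment and metadata (an inference from the numbers, not stated in the log), leaving the 510 MiB that every tool reports:

    ext4:  522240 blocks x 1 KiB = 510 MiB
    btrfs: Filesystem size       = 510.00 MiB
    xfs:   130560 blocks x 4 KiB = 510 MiB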
00:12:30.374 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:30.374 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1006154 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:32.909 00:12:32.909 real 0m3.593s 00:12:32.909 user 0m0.013s 00:12:32.909 sys 0m0.068s 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:32.909 ************************************ 00:12:32.909 END TEST filesystem_in_capsule_xfs 00:12:32.909 ************************************ 00:12:32.909 06:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:32.909 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:32.909 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1006154 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1006154 ']' 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1006154 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1006154 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1006154' 00:12:33.171 killing process with pid 1006154 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1006154 00:12:33.171 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1006154 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:33.739 00:12:33.739 real 0m15.578s 00:12:33.739 user 1m0.306s 00:12:33.739 sys 0m1.951s 00:12:33.739 06:16:23 
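Teardown mirrors setup in reverse: drop the test partition, flush, disconnect the host, delete the subsystem, then stop the application. The commands as executed above, condensed (rpc.py path assumed as before):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # serialize against other partition users
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # host side first...
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"                 # ...then the target (1006154 here)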
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.739 ************************************ 00:12:33.739 END TEST nvmf_filesystem_in_capsule 00:12:33.739 ************************************ 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.739 rmmod nvme_tcp 00:12:33.739 rmmod nvme_fabrics 00:12:33.739 rmmod nvme_keyring 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.739 06:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.646 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:35.646 00:12:35.646 real 0m36.869s 00:12:35.646 user 2m4.865s 00:12:35.646 sys 0m5.805s 00:12:35.646 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.646 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.646 
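nvmftestfini's cleanup, visible above, unwinds the host stack and the network plumbing: the nvme-tcp and nvme-fabrics modules are removed (taking nvme_keyring with them), and only the firewall rules the test tagged with the SPDK_NVMF comment are stripped on restore. A sketch; the body of _remove_spdk_ns is not shown in the log, so the netns deletion below is an assumption:

    modprobe -r nvme-tcp nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only test-added rules
    ip netns delete cvl_0_0_ns_spdk                        # presumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1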
************************************ 00:12:35.646 END TEST nvmf_filesystem 00:12:35.646 ************************************ 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.905 ************************************ 00:12:35.905 START TEST nvmf_target_discovery 00:12:35.905 ************************************ 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:35.905 * Looking for test storage... 00:12:35.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.905 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:35.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.906 --rc genhtml_branch_coverage=1 00:12:35.906 --rc genhtml_function_coverage=1 00:12:35.906 --rc genhtml_legend=1 00:12:35.906 --rc geninfo_all_blocks=1 00:12:35.906 --rc geninfo_unexecuted_blocks=1 00:12:35.906 00:12:35.906 ' 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:35.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.906 --rc genhtml_branch_coverage=1 00:12:35.906 --rc genhtml_function_coverage=1 00:12:35.906 --rc genhtml_legend=1 00:12:35.906 --rc geninfo_all_blocks=1 00:12:35.906 --rc geninfo_unexecuted_blocks=1 00:12:35.906 00:12:35.906 ' 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:35.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.906 --rc genhtml_branch_coverage=1 00:12:35.906 --rc genhtml_function_coverage=1 00:12:35.906 --rc genhtml_legend=1 00:12:35.906 --rc geninfo_all_blocks=1 00:12:35.906 --rc geninfo_unexecuted_blocks=1 00:12:35.906 00:12:35.906 ' 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:35.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.906 --rc genhtml_branch_coverage=1 00:12:35.906 --rc genhtml_function_coverage=1 00:12:35.906 --rc genhtml_legend=1 00:12:35.906 --rc geninfo_all_blocks=1 00:12:35.906 --rc geninfo_unexecuted_blocks=1 00:12:35.906 00:12:35.906 ' 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.906 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:35.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:35.907 06:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:38.435 06:16:28 
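The "[: : integer expression expected" complaint from common.sh line 33 above is harmless here (the test only needs the comparison to be false), but it is the classic empty-operand wart: '[' '' -eq 1 ']' is not a valid integer test. The usual guard is to default the variable; the variable's name is elided in the log, so FLAG below is hypothetical:

    [ "${FLAG:-0}" -eq 1 ]    # an unset/empty FLAG compares as 0 instead of erroring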
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:38.435 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:38.435 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:38.435 Found net devices under 0000:84:00.0: cvl_0_0 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:38.435 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:12:38.436 Found net devices under 0000:84:00.1: cvl_0_1
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:38.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:38.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms
00:12:38.436
00:12:38.436 --- 10.0.0.2 ping statistics ---
00:12:38.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:38.436 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:38.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:38.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms
00:12:38.436
00:12:38.436 --- 10.0.0.1 ping statistics ---
00:12:38.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:38.436 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1010183
00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1010183 00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1010183 ']' 00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.436 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.436 [2024-12-08 06:16:28.386449] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:12:38.436 [2024-12-08 06:16:28.386542] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.436 [2024-12-08 06:16:28.457953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.436 [2024-12-08 06:16:28.514845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.436 [2024-12-08 06:16:28.514898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.436 [2024-12-08 06:16:28.514922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.436 [2024-12-08 06:16:28.514934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.436 [2024-12-08 06:16:28.514945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
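Taken together, the trace above amounts to the following bring-up: one E810 port is moved into a private network namespace to play the NVMe-oF target, its sibling port stays in the root namespace as the initiator, a firewall rule admits NVMe/TCP on port 4420, and one ping in each direction proves the 10.0.0.0/24 link before the target app starts. A condensed, hedged sketch (addresses, interface names, and the nvmf_tgt invocation are taken from the log; the iptables comment tag is dropped and the socket-polling loop is a simplified stand-in for the harness's waitforlisten):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                                   # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # wait for the RPC socket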
00:12:38.436 [2024-12-08 06:16:28.516597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.436 [2024-12-08 06:16:28.516659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.436 [2024-12-08 06:16:28.516731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.436 [2024-12-08 06:16:28.516732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.697 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.697 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:38.697 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.697 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:38.697 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.697 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.697 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:38.697 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.697 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.697 [2024-12-08 06:16:28.665287] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.697 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.697 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:38.697 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:38.697 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 Null1 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 06:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 [2024-12-08 06:16:28.722929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 Null2 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:38.698 Null3 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 Null4 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.698 06:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.698 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:38.957 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.957 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:38.957 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.957 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:38.957 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.957 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:12:38.957 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.957 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:38.957 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.957 06:16:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420
00:12:38.958
00:12:38.958 Discovery Log Number of Records 6, Generation counter 6
00:12:38.958 =====Discovery Log Entry 0======
00:12:38.958 trtype: tcp
00:12:38.958 adrfam: ipv4
00:12:38.958 subtype: current discovery subsystem
00:12:38.958 treq: not required
00:12:38.958 portid: 0
00:12:38.958 trsvcid: 4420
00:12:38.958 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:38.958 traddr: 10.0.0.2
00:12:38.958 eflags: explicit discovery connections, duplicate discovery information
00:12:38.958 sectype: none
00:12:38.958 =====Discovery Log Entry 1======
00:12:38.958 trtype: tcp
00:12:38.958 adrfam: ipv4
00:12:38.958 subtype: nvme subsystem
00:12:38.958 treq: not required
00:12:38.958 portid: 0
00:12:38.958 trsvcid: 4420
00:12:38.958 subnqn: nqn.2016-06.io.spdk:cnode1
00:12:38.958 traddr: 10.0.0.2
00:12:38.958 eflags: none
00:12:38.958 sectype: none
00:12:38.958 =====Discovery Log Entry 2======
00:12:38.958 trtype: tcp
00:12:38.958 adrfam: ipv4
00:12:38.958 subtype: nvme subsystem
00:12:38.958 treq: not required
00:12:38.958 portid: 0
00:12:38.958 trsvcid: 4420
00:12:38.958 subnqn: nqn.2016-06.io.spdk:cnode2
00:12:38.958 traddr: 10.0.0.2
00:12:38.958 eflags: none
00:12:38.958 sectype: none
00:12:38.958 =====Discovery Log Entry 3======
00:12:38.958 trtype: tcp
00:12:38.958 adrfam: ipv4
00:12:38.958 subtype: nvme subsystem
00:12:38.958 treq: not required
00:12:38.958 portid: 0
00:12:38.958 trsvcid: 4420
00:12:38.958 subnqn: nqn.2016-06.io.spdk:cnode3
00:12:38.958 traddr: 10.0.0.2
00:12:38.958 eflags: none
00:12:38.958 sectype: none
00:12:38.958 =====Discovery Log Entry 4======
00:12:38.958 trtype: tcp
00:12:38.958 adrfam: ipv4
00:12:38.958 subtype: nvme subsystem
00:12:38.958 treq: not required
00:12:38.958 portid: 0
00:12:38.958 trsvcid: 4420
00:12:38.958 subnqn: nqn.2016-06.io.spdk:cnode4
00:12:38.958 traddr: 10.0.0.2
00:12:38.958 eflags: none
00:12:38.958 sectype: none
00:12:38.958 =====Discovery Log Entry 5======
00:12:38.958 trtype: tcp
00:12:38.958 adrfam: ipv4
00:12:38.958 subtype: discovery subsystem referral
00:12:38.958 treq: not required
00:12:38.958 portid: 0
00:12:38.958 trsvcid: 4430
00:12:38.958 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:38.958 traddr: 10.0.0.2
00:12:38.958 eflags: none
00:12:38.958 sectype: none
06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:12:38.958 Perform nvmf subsystem discovery via RPC
00:12:38.958 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:12:38.958 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.958 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:38.958 [
00:12:38.958 {
00:12:38.958 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:12:38.958 "subtype": "Discovery",
00:12:38.958 "listen_addresses": [
00:12:38.958 {
00:12:38.958 "trtype": "TCP",
00:12:38.958 "adrfam": "IPv4",
00:12:38.958 "traddr": "10.0.0.2",
00:12:38.958 "trsvcid": "4420"
00:12:38.958 }
00:12:38.958 ],
00:12:38.958 "allow_any_host": true,
00:12:38.958 "hosts": []
00:12:38.958 },
00:12:38.958 {
00:12:38.958 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:38.958 "subtype": "NVMe",
00:12:38.958 "listen_addresses": [
00:12:38.958 {
00:12:38.958 "trtype": "TCP",
00:12:38.958 "adrfam": "IPv4",
00:12:38.958 "traddr": "10.0.0.2",
00:12:38.958 "trsvcid": "4420"
00:12:38.958 }
00:12:38.958 ],
00:12:38.958 "allow_any_host": true,
00:12:38.958 "hosts": [],
00:12:38.958 "serial_number": "SPDK00000000000001",
00:12:38.958 "model_number": "SPDK bdev Controller",
00:12:38.958 "max_namespaces": 32,
00:12:38.958 "min_cntlid": 1,
00:12:38.958 "max_cntlid": 65519,
00:12:38.958 "namespaces": [
00:12:38.958 {
00:12:38.958 "nsid": 1,
00:12:38.958 "bdev_name": "Null1",
00:12:38.958 "name": "Null1",
00:12:38.958 "nguid": "4AD6C04AB5A44ABBA632DEF2BB60E6A6",
00:12:38.958 "uuid": "4ad6c04a-b5a4-4abb-a632-def2bb60e6a6"
00:12:38.958 }
00:12:38.958 ]
00:12:38.958 },
00:12:38.958 {
00:12:38.958 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:12:38.958 "subtype": "NVMe",
00:12:38.958 "listen_addresses": [
00:12:38.958 {
00:12:38.958 "trtype": "TCP",
00:12:38.958 "adrfam": "IPv4",
00:12:38.958 "traddr": "10.0.0.2",
00:12:38.958 "trsvcid": "4420"
00:12:38.958 }
00:12:38.958 ],
00:12:38.958 "allow_any_host": true,
00:12:38.958 "hosts": [],
00:12:38.958 "serial_number": "SPDK00000000000002",
00:12:38.958 "model_number": "SPDK bdev Controller",
00:12:38.958 "max_namespaces": 32,
00:12:38.958 "min_cntlid": 1,
00:12:38.958 "max_cntlid": 65519,
00:12:38.958 "namespaces": [
00:12:38.958 {
00:12:38.958 "nsid": 1,
00:12:38.958 "bdev_name": "Null2",
00:12:38.958 "name": "Null2",
00:12:38.958 "nguid": "48D122CEA7B34660964B5D8A31DFD471",
00:12:38.958 "uuid": "48d122ce-a7b3-4660-964b-5d8a31dfd471"
00:12:38.958 }
00:12:38.958 ]
00:12:38.958 },
00:12:38.958 {
00:12:38.958 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:12:38.958 "subtype": "NVMe",
00:12:38.958 "listen_addresses": [
00:12:38.958 {
00:12:38.958 "trtype": "TCP",
00:12:38.958 "adrfam": "IPv4",
00:12:38.958 "traddr": "10.0.0.2",
00:12:38.958 "trsvcid": "4420"
00:12:38.958 }
00:12:38.958 ],
00:12:38.958 "allow_any_host": true,
00:12:38.958 "hosts": [],
00:12:38.958 "serial_number": "SPDK00000000000003",
00:12:38.958 "model_number": "SPDK bdev Controller",
00:12:38.958 "max_namespaces": 32,
00:12:38.958 "min_cntlid": 1,
00:12:38.958 "max_cntlid": 65519,
00:12:38.958 "namespaces": [
00:12:38.958 {
00:12:38.958 "nsid": 1,
00:12:38.958 "bdev_name": "Null3",
00:12:38.958 "name": "Null3",
00:12:38.958 "nguid": "FA87A8CF769D40729C22E37CED3112F5",
00:12:38.958 "uuid": "fa87a8cf-769d-4072-9c22-e37ced3112f5"
00:12:38.958 }
00:12:38.958 ]
00:12:38.958 },
00:12:38.958 {
00:12:38.958 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:12:38.958 "subtype": "NVMe",
00:12:38.958 "listen_addresses": [
00:12:38.958 {
00:12:38.958 "trtype": "TCP",
00:12:38.958 "adrfam": "IPv4",
00:12:38.958 "traddr": "10.0.0.2",
00:12:38.958 "trsvcid": "4420"
00:12:38.958 }
00:12:38.958 ],
00:12:38.958 "allow_any_host": true,
00:12:38.958 "hosts": [],
00:12:38.958 "serial_number": "SPDK00000000000004",
00:12:38.958 "model_number": "SPDK bdev Controller",
00:12:38.958 "max_namespaces": 32,
00:12:38.958 "min_cntlid": 1,
00:12:38.958 "max_cntlid": 65519,
00:12:38.958 "namespaces": [
00:12:38.958 {
00:12:38.958 "nsid": 1,
00:12:38.958 "bdev_name": "Null4",
00:12:38.958 "name": "Null4",
00:12:38.958 "nguid": "DCA95C66455A421BB7D229695EF4467E",
00:12:38.958 "uuid": "dca95c66-455a-421b-b7d2-29695ef4467e"
00:12:38.958 }
00:12:38.958 ]
00:12:38.958 }
00:12:38.958 ]
00:12:38.958 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.958 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:12:38.958 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:38.958 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:38.958 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.958 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:38.959 06:16:29
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.959 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.216 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.216 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:39.216 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:39.216 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.216 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.216 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.216 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:39.216 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.216 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.216 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.216 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:39.216 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:39.217 06:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:39.217 rmmod nvme_tcp 00:12:39.217 rmmod nvme_fabrics 00:12:39.217 rmmod nvme_keyring 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1010183 ']' 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1010183 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1010183 ']' 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1010183 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1010183 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1010183' 00:12:39.217 killing process with pid 1010183 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1010183 00:12:39.217 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1010183 00:12:39.474 06:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:39.474 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:39.474 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:39.474 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:12:39.474 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save
00:12:39.474 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:39.474 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:12:39.474 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:39.474 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:39.474 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:39.474 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:39.474 06:16:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:41.382 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:41.382
00:12:41.382 real 0m5.698s
00:12:41.382 user 0m4.637s
00:12:41.382 sys 0m2.031s
00:12:41.382 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:41.382 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:41.382 ************************************
00:12:41.382 END TEST nvmf_target_discovery
00:12:41.382 ************************************
00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:41.642 ************************************
00:12:41.642 START TEST nvmf_referrals
00:12:41.642 ************************************
00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:41.642 * Looking for test storage...
00:12:41.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.642 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:41.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.643 --rc genhtml_branch_coverage=1 00:12:41.643 --rc genhtml_function_coverage=1 00:12:41.643 --rc genhtml_legend=1 00:12:41.643 --rc geninfo_all_blocks=1 00:12:41.643 --rc geninfo_unexecuted_blocks=1 00:12:41.643 00:12:41.643 ' 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:41.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.643 --rc genhtml_branch_coverage=1 00:12:41.643 --rc genhtml_function_coverage=1 00:12:41.643 --rc genhtml_legend=1 00:12:41.643 --rc geninfo_all_blocks=1 00:12:41.643 --rc geninfo_unexecuted_blocks=1 00:12:41.643 00:12:41.643 ' 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:41.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.643 --rc genhtml_branch_coverage=1 00:12:41.643 --rc genhtml_function_coverage=1 00:12:41.643 --rc genhtml_legend=1 00:12:41.643 --rc geninfo_all_blocks=1 00:12:41.643 --rc geninfo_unexecuted_blocks=1 00:12:41.643 00:12:41.643 ' 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:41.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.643 --rc genhtml_branch_coverage=1 00:12:41.643 --rc genhtml_function_coverage=1 00:12:41.643 --rc genhtml_legend=1 00:12:41.643 --rc geninfo_all_blocks=1 00:12:41.643 --rc geninfo_unexecuted_blocks=1 00:12:41.643 00:12:41.643 ' 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:41.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:41.643 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:44.181 06:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.181 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:44.182 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:44.182 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:44.182 
06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:44.182 Found net devices under 0000:84:00.0: cvl_0_0 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:44.182 Found net devices under 0000:84:00.1: cvl_0_1 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:44.182 06:16:33 
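
Each matched PCI function is then mapped to its kernel interfaces by globbing sysfs, exactly as the pci_net_devs assignment above does, and only interfaces that are up are kept. A standalone sketch using the address found in this run; the operstate read stands in for the script's "up == up" test, whose exact source is not visible in this trace:

    pci=0000:84:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)      # e.g. .../net/cvl_0_0
    for dev in "${pci_net_devs[@]}"; do
        [[ $(cat "$dev/operstate" 2>/dev/null) == up ]] &&
            echo "Found net devices under $pci: ${dev##*/}"
    done
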
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.182 06:16:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:44.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:44.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:12:44.182 00:12:44.182 --- 10.0.0.2 ping statistics --- 00:12:44.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.182 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:44.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:12:44.182 00:12:44.182 --- 10.0.0.1 ping statistics --- 00:12:44.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.182 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.182 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1012296 00:12:44.183 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.183 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1012296 00:12:44.183 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1012296 ']' 00:12:44.183 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.183 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.183 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
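
The nvmf_tcp_init sequence traced above builds a two-endpoint topology on a single host: the first port (cvl_0_0) moves into a private namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP port, and the two pings prove reachability in both directions. Condensed from the trace (interface names and addresses are the ones this run chose):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
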
00:12:44.183 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.183 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.183 [2024-12-08 06:16:34.138999] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:12:44.183 [2024-12-08 06:16:34.139106] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.183 [2024-12-08 06:16:34.210527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.183 [2024-12-08 06:16:34.266921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.183 [2024-12-08 06:16:34.266984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.183 [2024-12-08 06:16:34.267009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.183 [2024-12-08 06:16:34.267021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.183 [2024-12-08 06:16:34.267046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.183 [2024-12-08 06:16:34.268790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.183 [2024-12-08 06:16:34.268856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.183 [2024-12-08 06:16:34.268920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.183 [2024-12-08 06:16:34.268923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.442 [2024-12-08 06:16:34.409376] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
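
With networking verified, nvmfappstart launches the target inside the namespace and the test drives it over the RPC socket, creating the TCP transport and the discovery listener. A condensed sketch, assuming rpc_cmd wraps scripts/rpc.py (the polling loop only approximates waitforlisten, it is not its literal implementation; transport flags are copied verbatim from the run):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done    # ~ waitforlisten
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
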
00:12:44.442 [2024-12-08 06:16:34.436921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:44.442 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.700 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.958 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:44.958 06:16:34 
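
The referral round-trip traced across the last few entries, condensed below and reusing the $RPC shorthand from the previous sketch: three referrals are registered, counted and listed over RPC, cross-checked from the host side (everything in the discovery log page other than the discovery controller we are already connected to must be a referral), then removed again. The hostnqn/hostid values are the ones generated for this run:

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $RPC nvmf_discovery_get_referrals | jq length                       # -> 3
    $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
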
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:44.958 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:44.958 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:44.958 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.958 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:44.958 06:16:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:44.958 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.215 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:45.475 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:45.475 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:45.475 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:45.475 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:45.475 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.475 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.733 06:16:35 
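
A referral can also name the subsystem it points at: above, the same traddr is added once with -n discovery and once with -n nqn.2016-06.io.spdk:cnode1, and each surfaces in the discovery log page under a different record subtype. The sequence, with get_discovery_entries reconstructed from its @31-@34 trace lines (NVME_HOST is the --hostnqn/--hostid pair nvmf/common.sh defines; the function body is inferred from the trace, not copied source):

    $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    get_discovery_entries() {
        local subtype=$1
        nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json |
            jq ".records[] | select(.subtype == \"$subtype\")"
    }
    get_discovery_entries 'nvme subsystem' | jq -r .subnqn                 # nqn.2016-06.io.spdk:cnode1
    get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn   # nqn.2014-08.org.nvmexpress.discovery
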
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:45.733 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:45.734 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:45.734 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:45.734 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.734 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:45.734 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:45.734 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:45.734 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:45.734 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:45.734 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:45.734 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:45.734 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.734 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:45.993 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:45.993 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:45.993 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:45.993 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@76 -- # jq -r .subnqn 00:12:45.993 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.993 06:16:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:45.993 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:45.993 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:45.993 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.993 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.993 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.993 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:45.993 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.993 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:45.993 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp 
']' 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:46.251 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:46.251 rmmod nvme_tcp 00:12:46.509 rmmod nvme_fabrics 00:12:46.509 rmmod nvme_keyring 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1012296 ']' 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1012296 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1012296 ']' 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1012296 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1012296 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1012296' 00:12:46.509 killing process with pid 1012296 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1012296 00:12:46.509 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1012296 00:12:46.769 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.769 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:46.769 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:46.769 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:46.769 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:46.769 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:46.769 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:46.769 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:46.769 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:46.769 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.769 06:16:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.769 06:16:36 
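
Teardown, which finishes just below with the namespace removal, reverses everything: unload the host-side modules, kill the target, and reload a firewall ruleset with every SPDK_NVMF-tagged line filtered out; this is the payoff of the ipts wrapper having stamped each inserted rule with an "-m comment --comment 'SPDK_NVMF:...'" marker. Condensed (the netns delete is the assumed effect of _remove_spdk_ns, whose source is not shown in this trace):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"             # pid 1012296 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed effect of _remove_spdk_ns
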
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.677 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:48.677 00:12:48.677 real 0m7.193s 00:12:48.677 user 0m11.152s 00:12:48.677 sys 0m2.376s 00:12:48.677 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.677 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.677 ************************************ 00:12:48.677 END TEST nvmf_referrals 00:12:48.677 ************************************ 00:12:48.677 06:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:48.677 06:16:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:48.677 06:16:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.677 06:16:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:48.677 ************************************ 00:12:48.677 START TEST nvmf_connect_disconnect 00:12:48.677 ************************************ 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:48.995 * Looking for test storage... 00:12:48.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.995 06:16:38 
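
The lcov gate opening here (its comparison loop continues below) is a pure-bash version check: both strings are split on "."/"-"/":" into arrays and walked field by field, so "lt 1.15 2" succeeds and the richer LCOV_OPTS get exported. A simplified, numeric-fields-only sketch of that loop; the real cmp_versions in scripts/common.sh also handles the other operators:

    lt() {    # return 0 when version $1 sorts before $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2: enable branch/function coverage flags"
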
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:48.995 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:48.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.996 --rc genhtml_branch_coverage=1 00:12:48.996 --rc genhtml_function_coverage=1 00:12:48.996 --rc genhtml_legend=1 00:12:48.996 --rc geninfo_all_blocks=1 00:12:48.996 --rc geninfo_unexecuted_blocks=1 00:12:48.996 00:12:48.996 ' 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:48.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.996 --rc genhtml_branch_coverage=1 00:12:48.996 --rc genhtml_function_coverage=1 00:12:48.996 --rc genhtml_legend=1 00:12:48.996 --rc geninfo_all_blocks=1 00:12:48.996 --rc geninfo_unexecuted_blocks=1 00:12:48.996 00:12:48.996 ' 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:48.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.996 --rc genhtml_branch_coverage=1 00:12:48.996 --rc genhtml_function_coverage=1 00:12:48.996 --rc genhtml_legend=1 00:12:48.996 --rc geninfo_all_blocks=1 00:12:48.996 --rc geninfo_unexecuted_blocks=1 00:12:48.996 00:12:48.996 ' 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:48.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.996 --rc genhtml_branch_coverage=1 00:12:48.996 --rc genhtml_function_coverage=1 00:12:48.996 --rc genhtml_legend=1 00:12:48.996 --rc geninfo_all_blocks=1 00:12:48.996 --rc geninfo_unexecuted_blocks=1 00:12:48.996 00:12:48.996 ' 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.996 06:16:38 
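
Note how each re-source of paths/export.sh prepends the Go, protoc, and golangci directories again, so PATH accumulates the same prefix several times over. Harmless, but an idempotent guard would stop the growth; an illustrative pattern only, not what export.sh currently does:

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/golangci/1.54.2/bin
    export PATH
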
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:48.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:48.996 06:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:50.900 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.900 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:50.900 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:50.900 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:50.900 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:50.900 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:50.900 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.159 
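
The stderr captured a few entries above, "[: : integer expression expected", is the classic failure of handing test's numeric -eq an empty string: '[' '' -eq 1 ']' has no integer to parse. It is non-fatal here because the script treats the failed test as false and moves on, but a ":-0" default expansion makes the comparison well-defined. A minimal reproduction:

    flag=''
    [ "$flag" -eq 1 ] 2>/dev/null || echo "bare -eq on an empty string errors and returns nonzero"
    [ "${flag:-0}" -eq 1 ] || echo "with a :-0 default the test is simply false"
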
06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:51.159 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.159 
06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:51.159 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:51.159 Found net devices under 0000:84:00.0: cvl_0_0 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:51.159 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:12:51.160 Found net devices under 0000:84:00.1: cvl_0_1
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:51.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:51.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms
00:12:51.160
00:12:51.160 --- 10.0.0.2 ping statistics ---
00:12:51.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:51.160 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:51.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:51.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms
00:12:51.160
00:12:51.160 --- 10.0.0.1 ping statistics ---
00:12:51.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:51.160 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1014620
00:12:51.160 06:16:41
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1014620 00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1014620 ']' 00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.160 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.160 [2024-12-08 06:16:41.240393] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:12:51.160 [2024-12-08 06:16:41.240474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.418 [2024-12-08 06:16:41.313482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.418 [2024-12-08 06:16:41.369558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.418 [2024-12-08 06:16:41.369613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.418 [2024-12-08 06:16:41.369635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.418 [2024-12-08 06:16:41.369645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.418 [2024-12-08 06:16:41.369655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
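The xtrace run above is nvmftestinit building the point-to-point NVMe/TCP test topology: the first E810 port (cvl_0_0) is moved into a fresh network namespace as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator side, both get addresses on 10.0.0.0/24, TCP port 4420 is opened in iptables, reachability is checked with one ping in each direction, and nvmf_tgt is then launched inside the namespace. Condensed into plain shell it amounts to the sketch below; every command is lifted from the trace except the final polling loop, which is an illustrative stand-in for the harness's waitforlisten helper:

  # Target port lives in its own netns; the initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
  # Start the target app inside the namespace, then wait for its RPC socket.
  # The until-loop is an assumed stand-in; the harness polls via waitforlisten.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done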
00:12:51.418 [2024-12-08 06:16:41.371276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.418 [2024-12-08 06:16:41.371342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.418 [2024-12-08 06:16:41.371405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.418 [2024-12-08 06:16:41.371408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.418 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.418 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:51.418 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.418 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.418 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.418 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.418 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:51.418 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.418 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.418 [2024-12-08 06:16:41.520024] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.418 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.418 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:51.418 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.418 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.676 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.676 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:51.676 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:51.676 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.676 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.676 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.676 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:51.676 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.676 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.676 06:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.676 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.676 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.676 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.676 [2024-12-08 06:16:41.595055] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.677 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.677 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:51.677 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:51.677 06:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:54.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.125 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:05.125 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:05.125 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.125 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:05.125 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:05.125 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:05.125 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:05.125 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:05.125 rmmod nvme_tcp 00:13:05.125 rmmod nvme_fabrics 00:13:05.125 rmmod nvme_keyring 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1014620 ']' 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1014620 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1014620 ']' 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1014620 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
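With the target listening, connect_disconnect.sh provisions a 64 MiB malloc-backed namespace through the rpc_cmd calls traced above (rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py) and then cycles a host connection num_iterations=5 times, each cycle ending in the nvme-cli disconnect summary logged above. A condensed sketch of the same sequence; the nvme connect flags are an assumption on my part, since the excerpt shows only the disconnect output:

  # Bring-up, exactly as traced: transport, bdev, subsystem, namespace, listener.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                      # 64 MiB bdev, 512 B blocks -> Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Five connect/disconnect cycles; the nvme connect flags below are assumed.
  for _ in $(seq 1 5); do
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # prints "NQN:... disconnected 1 controller(s)"
  done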
00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1014620 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1014620' 00:13:05.398 killing process with pid 1014620 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1014620 00:13:05.398 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1014620 00:13:05.669 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:05.669 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:05.669 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:05.669 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:05.669 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:05.669 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:05.669 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:05.669 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:05.669 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:05.669 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.669 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.669 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.604 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:07.604 00:13:07.604 real 0m18.818s 00:13:07.604 user 0m56.349s 00:13:07.604 sys 0m3.425s 00:13:07.604 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.604 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:07.604 ************************************ 00:13:07.604 END TEST nvmf_connect_disconnect 00:13:07.604 ************************************ 00:13:07.604 06:16:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:07.604 06:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:07.604 06:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.604 06:16:57 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.604 ************************************ 00:13:07.604 START TEST nvmf_multitarget 00:13:07.604 ************************************ 00:13:07.604 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:07.604 * Looking for test storage... 00:13:07.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:07.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.863 --rc genhtml_branch_coverage=1 00:13:07.863 --rc genhtml_function_coverage=1 00:13:07.863 --rc genhtml_legend=1 00:13:07.863 --rc geninfo_all_blocks=1 00:13:07.863 --rc geninfo_unexecuted_blocks=1 00:13:07.863 00:13:07.863 ' 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:07.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.863 --rc genhtml_branch_coverage=1 00:13:07.863 --rc genhtml_function_coverage=1 00:13:07.863 --rc genhtml_legend=1 00:13:07.863 --rc geninfo_all_blocks=1 00:13:07.863 --rc geninfo_unexecuted_blocks=1 00:13:07.863 00:13:07.863 ' 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:07.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.863 --rc genhtml_branch_coverage=1 00:13:07.863 --rc genhtml_function_coverage=1 00:13:07.863 --rc genhtml_legend=1 00:13:07.863 --rc geninfo_all_blocks=1 00:13:07.863 --rc geninfo_unexecuted_blocks=1 00:13:07.863 00:13:07.863 ' 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:07.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.863 --rc genhtml_branch_coverage=1 00:13:07.863 --rc genhtml_function_coverage=1 00:13:07.863 --rc genhtml_legend=1 00:13:07.863 --rc geninfo_all_blocks=1 00:13:07.863 --rc geninfo_unexecuted_blocks=1 00:13:07.863 00:13:07.863 ' 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.863 06:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.863 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:07.864 06:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.864 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:10.396 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:10.396 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:10.396 Found net devices under 0000:84:00.0: cvl_0_0 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:10.396 Found net devices under 0000:84:00.1: cvl_0_1 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.396 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:10.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:13:10.397 00:13:10.397 --- 10.0.0.2 ping statistics --- 00:13:10.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.397 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:13:10.397 00:13:10.397 --- 10.0.0.1 ping statistics --- 00:13:10.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.397 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1018395 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1018395 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1018395 ']' 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:10.397 [2024-12-08 06:17:00.232176] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
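This second nvmf_tgt instance (pid 1018395) boots the same way as the first. The multitarget checks traced below exercise the test-only nvmf_create_target and nvmf_delete_target RPCs through multitarget_rpc.py, counting targets with jq after each step. A condensed sketch of the calls that follow; the harness string-compares the jq output (e.g. '[' 3 '!=' 3 ']'), which is rendered here as equivalent numeric tests:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new targets
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default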
00:13:10.397 [2024-12-08 06:17:00.232271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.397 [2024-12-08 06:17:00.310574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.397 [2024-12-08 06:17:00.372507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.397 [2024-12-08 06:17:00.372580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.397 [2024-12-08 06:17:00.372594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.397 [2024-12-08 06:17:00.372605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.397 [2024-12-08 06:17:00.372614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.397 [2024-12-08 06:17:00.374284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.397 [2024-12-08 06:17:00.374348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.397 [2024-12-08 06:17:00.374428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.397 [2024-12-08 06:17:00.374431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:10.397 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:10.656 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.656 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:10.656 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:10.656 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:10.656 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:10.656 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:10.656 "nvmf_tgt_1" 00:13:10.914 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:10.914 "nvmf_tgt_2" 00:13:10.914 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:13:10.914 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:10.914 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:10.914 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:11.173 true 00:13:11.173 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:11.173 true 00:13:11.173 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:11.173 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.433 rmmod nvme_tcp 00:13:11.433 rmmod nvme_fabrics 00:13:11.433 rmmod nvme_keyring 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1018395 ']' 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1018395 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1018395 ']' 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1018395 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018395 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.433 06:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018395' 00:13:11.433 killing process with pid 1018395 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1018395 00:13:11.433 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1018395 00:13:11.692 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:11.692 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:11.692 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:11.692 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:11.692 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:11.692 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:11.692 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:11.692 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.692 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:11.692 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.692 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.692 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:14.229 00:13:14.229 real 0m6.073s 00:13:14.229 user 0m6.907s 00:13:14.229 sys 0m2.146s 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:14.229 ************************************ 00:13:14.229 END TEST nvmf_multitarget 00:13:14.229 ************************************ 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.229 ************************************ 00:13:14.229 START TEST nvmf_rpc 00:13:14.229 ************************************ 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:14.229 * Looking for test storage... 
00:13:14.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.229 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:14.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.230 --rc genhtml_branch_coverage=1 00:13:14.230 --rc genhtml_function_coverage=1 00:13:14.230 --rc genhtml_legend=1 00:13:14.230 --rc geninfo_all_blocks=1 00:13:14.230 --rc geninfo_unexecuted_blocks=1 00:13:14.230 00:13:14.230 ' 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:14.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.230 --rc genhtml_branch_coverage=1 00:13:14.230 --rc genhtml_function_coverage=1 00:13:14.230 --rc genhtml_legend=1 00:13:14.230 --rc geninfo_all_blocks=1 00:13:14.230 --rc geninfo_unexecuted_blocks=1 00:13:14.230 00:13:14.230 ' 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:14.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.230 --rc genhtml_branch_coverage=1 00:13:14.230 --rc genhtml_function_coverage=1 00:13:14.230 --rc genhtml_legend=1 00:13:14.230 --rc geninfo_all_blocks=1 00:13:14.230 --rc geninfo_unexecuted_blocks=1 00:13:14.230 00:13:14.230 ' 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:14.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.230 --rc genhtml_branch_coverage=1 00:13:14.230 --rc genhtml_function_coverage=1 00:13:14.230 --rc genhtml_legend=1 00:13:14.230 --rc geninfo_all_blocks=1 00:13:14.230 --rc geninfo_unexecuted_blocks=1 00:13:14.230 00:13:14.230 ' 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
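The lt/cmp_versions trace above compares dotted version strings field by field: split both operands on '.' and '-', validate each component with the decimal helper, and decide at the first differing field, which is why 1.15 sorts before 2. A compressed re-implementation of the same idea — an illustrative sketch, not the scripts/common.sh source:

    ver_lt() {                                    # ver_lt 1.15 2  -> true
        local IFS=.- i a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1                                  # equal is not less-than
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov"
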
00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.230 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:14.231 06:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:14.231 06:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.138 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:16.139 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:16.139 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:16.139 Found net devices under 0000:84:00.0: cvl_0_0 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:16.139 Found net devices under 0000:84:00.1: cvl_0_1 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:16.139 06:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.139 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:16.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:13:16.396 00:13:16.396 --- 10.0.0.2 ping statistics --- 00:13:16.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.396 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:16.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:13:16.396 00:13:16.396 --- 10.0.0.1 ping statistics --- 00:13:16.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.396 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1020524 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1020524 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1020524 ']' 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.396 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.396 [2024-12-08 06:17:06.438306] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
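The two clean pings close out the standard phy-test topology: the second port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, while its sibling cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as the 10.0.0.2 target, with an iptables rule admitting NVMe/TCP traffic on port 4420. Condensed from the nvmf_tcp_init commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> initiator
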
00:13:16.396 [2024-12-08 06:17:06.438399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.396 [2024-12-08 06:17:06.513329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.654 [2024-12-08 06:17:06.571245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.654 [2024-12-08 06:17:06.571307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.654 [2024-12-08 06:17:06.571330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.654 [2024-12-08 06:17:06.571341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.654 [2024-12-08 06:17:06.571350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.654 [2024-12-08 06:17:06.572985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.654 [2024-12-08 06:17:06.573069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.654 [2024-12-08 06:17:06.573200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.654 [2024-12-08 06:17:06.573218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:16.654 "tick_rate": 2700000000, 00:13:16.654 "poll_groups": [ 00:13:16.654 { 00:13:16.654 "name": "nvmf_tgt_poll_group_000", 00:13:16.654 "admin_qpairs": 0, 00:13:16.654 "io_qpairs": 0, 00:13:16.654 "current_admin_qpairs": 0, 00:13:16.654 "current_io_qpairs": 0, 00:13:16.654 "pending_bdev_io": 0, 00:13:16.654 "completed_nvme_io": 0, 00:13:16.654 "transports": [] 00:13:16.654 }, 00:13:16.654 { 00:13:16.654 "name": "nvmf_tgt_poll_group_001", 00:13:16.654 "admin_qpairs": 0, 00:13:16.654 "io_qpairs": 0, 00:13:16.654 "current_admin_qpairs": 0, 00:13:16.654 "current_io_qpairs": 0, 00:13:16.654 "pending_bdev_io": 0, 00:13:16.654 "completed_nvme_io": 0, 00:13:16.654 "transports": [] 00:13:16.654 }, 00:13:16.654 { 00:13:16.654 "name": "nvmf_tgt_poll_group_002", 00:13:16.654 "admin_qpairs": 0, 00:13:16.654 "io_qpairs": 0, 00:13:16.654 
"current_admin_qpairs": 0, 00:13:16.654 "current_io_qpairs": 0, 00:13:16.654 "pending_bdev_io": 0, 00:13:16.654 "completed_nvme_io": 0, 00:13:16.654 "transports": [] 00:13:16.654 }, 00:13:16.654 { 00:13:16.654 "name": "nvmf_tgt_poll_group_003", 00:13:16.654 "admin_qpairs": 0, 00:13:16.654 "io_qpairs": 0, 00:13:16.654 "current_admin_qpairs": 0, 00:13:16.654 "current_io_qpairs": 0, 00:13:16.654 "pending_bdev_io": 0, 00:13:16.654 "completed_nvme_io": 0, 00:13:16.654 "transports": [] 00:13:16.654 } 00:13:16.654 ] 00:13:16.654 }' 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:16.654 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.912 [2024-12-08 06:17:06.816789] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:16.912 "tick_rate": 2700000000, 00:13:16.912 "poll_groups": [ 00:13:16.912 { 00:13:16.912 "name": "nvmf_tgt_poll_group_000", 00:13:16.912 "admin_qpairs": 0, 00:13:16.912 "io_qpairs": 0, 00:13:16.912 "current_admin_qpairs": 0, 00:13:16.912 "current_io_qpairs": 0, 00:13:16.912 "pending_bdev_io": 0, 00:13:16.912 "completed_nvme_io": 0, 00:13:16.912 "transports": [ 00:13:16.912 { 00:13:16.912 "trtype": "TCP" 00:13:16.912 } 00:13:16.912 ] 00:13:16.912 }, 00:13:16.912 { 00:13:16.912 "name": "nvmf_tgt_poll_group_001", 00:13:16.912 "admin_qpairs": 0, 00:13:16.912 "io_qpairs": 0, 00:13:16.912 "current_admin_qpairs": 0, 00:13:16.912 "current_io_qpairs": 0, 00:13:16.912 "pending_bdev_io": 0, 00:13:16.912 "completed_nvme_io": 0, 00:13:16.912 "transports": [ 00:13:16.912 { 00:13:16.912 "trtype": "TCP" 00:13:16.912 } 00:13:16.912 ] 00:13:16.912 }, 00:13:16.912 { 00:13:16.912 "name": "nvmf_tgt_poll_group_002", 00:13:16.912 "admin_qpairs": 0, 00:13:16.912 "io_qpairs": 0, 00:13:16.912 "current_admin_qpairs": 0, 00:13:16.912 "current_io_qpairs": 0, 00:13:16.912 "pending_bdev_io": 0, 00:13:16.912 "completed_nvme_io": 0, 00:13:16.912 "transports": [ 00:13:16.912 { 00:13:16.912 "trtype": "TCP" 
00:13:16.912 } 00:13:16.912 ] 00:13:16.912 }, 00:13:16.912 { 00:13:16.912 "name": "nvmf_tgt_poll_group_003", 00:13:16.912 "admin_qpairs": 0, 00:13:16.912 "io_qpairs": 0, 00:13:16.912 "current_admin_qpairs": 0, 00:13:16.912 "current_io_qpairs": 0, 00:13:16.912 "pending_bdev_io": 0, 00:13:16.912 "completed_nvme_io": 0, 00:13:16.912 "transports": [ 00:13:16.912 { 00:13:16.912 "trtype": "TCP" 00:13:16.912 } 00:13:16.912 ] 00:13:16.912 } 00:13:16.912 ] 00:13:16.912 }' 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.912 Malloc1 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.912 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.913 [2024-12-08 06:17:06.989582] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:16.913 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:16.913 [2024-12-08 06:17:07.012169] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:13:17.172 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:17.172 could not add new controller: failed to write to nvme-fabrics device 00:13:17.172 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:17.172 06:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:17.172 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:17.172 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:17.172 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:17.172 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.172 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.172 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.172 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.737 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.737 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:17.737 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.737 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:17.737 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:19.639 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:19.640 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:19.640 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.640 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:19.640 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.640 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:19.640 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.640 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.640 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:19.640 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:19.640 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.901 [2024-12-08 06:17:09.803975] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:13:19.901 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:19.901 could not add new controller: failed to write to nvme-fabrics device 00:13:19.901 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:19.902 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:19.902 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:19.902 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:19.902 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:19.902 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.902 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.902 
06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.902 06:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:20.470 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:20.470 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:20.470 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.470 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:20.470 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:22.382 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:22.382 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:22.382 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.382 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:22.382 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.382 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:22.382 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.643 
06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.643 [2024-12-08 06:17:12.596959] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.643 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:23.210 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:23.210 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:23.210 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.210 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:23.210 06:17:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.743 [2024-12-08 06:17:15.430939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.743 06:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.307 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:26.307 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:26.307 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.307 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:26.307 06:17:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.210 [2024-12-08 06:17:18.252159] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.210 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.168 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.168 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:29.168 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.168 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:29.168 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:31.071 
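waitforserial and waitforserial_disconnect are what keep these iterations honest: rather than a fixed delay, they poll lsblk until a block device carrying the subsystem serial appears (or disappears), bounded at 15 tries with a 2-second pause, exactly as the i++ / grep -c counters in the trace show. The appearance side reduces to a sketch like this:

# Wait until one device with the expected serial is visible.
i=0
sleep 2
while (( i++ <= 15 )); do
  nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
  (( nvme_devices == 1 )) && break
  sleep 2
done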
06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:31.072 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:31.072 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.072 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:31.072 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.072 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:31.072 06:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.072 [2024-12-08 06:17:21.066323] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.072 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:31.639 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.639 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:31.639 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.639 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:31.639 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.175 [2024-12-08 06:17:23.878754] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.175 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.748 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.748 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:34.748 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.748 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:34.748 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:36.697 
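The second seq 1 5 loop (rpc.sh@99-107) that starts here drops the initiator entirely: it only cycles namespace attach/detach to exercise target-side bookkeeping. Note that nvmf_subsystem_add_ns is now called without -n, so the target auto-assigns the nsid, and nvmf_subsystem_remove_ns 1 tears down that auto-assigned namespace. One iteration in RPC form, under the same assumptions as the sketch above:

rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n: nsid auto-assigned (1)
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1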
06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.697 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 [2024-12-08 06:17:26.704055] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 [2024-12-08 06:17:26.752090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 
06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 [2024-12-08 06:17:26.800230] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.698 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.957 [2024-12-08 06:17:26.848402] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.957 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.958 [2024-12-08 06:17:26.896571] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:36.958 "tick_rate": 2700000000, 00:13:36.958 "poll_groups": [ 00:13:36.958 { 00:13:36.958 "name": "nvmf_tgt_poll_group_000", 00:13:36.958 "admin_qpairs": 2, 00:13:36.958 "io_qpairs": 84, 00:13:36.958 "current_admin_qpairs": 0, 00:13:36.958 "current_io_qpairs": 0, 00:13:36.958 "pending_bdev_io": 0, 00:13:36.958 "completed_nvme_io": 183, 00:13:36.958 "transports": [ 00:13:36.958 { 00:13:36.958 "trtype": "TCP" 00:13:36.958 } 00:13:36.958 ] 00:13:36.958 }, 00:13:36.958 { 00:13:36.958 "name": "nvmf_tgt_poll_group_001", 00:13:36.958 "admin_qpairs": 2, 00:13:36.958 "io_qpairs": 84, 00:13:36.958 "current_admin_qpairs": 0, 00:13:36.958 "current_io_qpairs": 0, 00:13:36.958 "pending_bdev_io": 0, 00:13:36.958 "completed_nvme_io": 134, 00:13:36.958 "transports": [ 00:13:36.958 { 00:13:36.958 "trtype": "TCP" 00:13:36.958 } 00:13:36.958 ] 00:13:36.958 }, 00:13:36.958 { 00:13:36.958 "name": "nvmf_tgt_poll_group_002", 00:13:36.958 "admin_qpairs": 1, 00:13:36.958 "io_qpairs": 84, 00:13:36.958 "current_admin_qpairs": 0, 00:13:36.958 "current_io_qpairs": 0, 00:13:36.958 "pending_bdev_io": 0, 00:13:36.958 "completed_nvme_io": 196, 00:13:36.958 "transports": [ 00:13:36.958 { 00:13:36.958 "trtype": "TCP" 00:13:36.958 } 00:13:36.958 ] 00:13:36.958 }, 00:13:36.958 { 00:13:36.958 "name": "nvmf_tgt_poll_group_003", 00:13:36.958 "admin_qpairs": 2, 00:13:36.958 "io_qpairs": 84, 00:13:36.958 "current_admin_qpairs": 0, 00:13:36.958 "current_io_qpairs": 0, 00:13:36.958 "pending_bdev_io": 0, 00:13:36.958 "completed_nvme_io": 173, 00:13:36.958 "transports": [ 00:13:36.958 { 00:13:36.958 "trtype": "TCP" 00:13:36.958 } 00:13:36.958 ] 00:13:36.958 } 00:13:36.958 ] 00:13:36.958 }' 00:13:36.958 06:17:26 
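The jsum helper that consumes this JSON next is a jq-plus-awk fold: extract one numeric field per poll group, then sum. For the stats above that yields 2+2+1+2 = 7 admin qpairs and 84x4 = 336 I/O qpairs, which is what the (( 7 > 0 )) and (( 336 > 0 )) assertions below verify. Equivalent one-liners, assuming the stats are piped straight from rpc.py nvmf_get_stats:

rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 7
rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # 336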
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:36.958 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:36.958 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:36.958 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:36.958 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:36.958 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:36.958 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:36.958 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:36.958 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:36.958 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:36.958 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:36.958 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:36.958 rmmod nvme_tcp 00:13:36.958 rmmod nvme_fabrics 00:13:36.958 rmmod nvme_keyring 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1020524 ']' 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1020524 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1020524 ']' 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1020524 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1020524 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1020524' 00:13:37.217 killing process with pid 1020524 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1020524 00:13:37.217 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1020524 00:13:37.477 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:37.477 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:37.477 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:37.477 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:37.477 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:37.477 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:37.477 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:37.477 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:37.477 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:37.477 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.477 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.477 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.383 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:39.383 00:13:39.383 real 0m25.646s 00:13:39.383 user 1m22.565s 00:13:39.383 sys 0m4.377s 00:13:39.383 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.383 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.383 ************************************ 00:13:39.383 END TEST nvmf_rpc 00:13:39.383 ************************************ 00:13:39.383 06:17:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:39.383 06:17:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:39.383 06:17:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.383 06:17:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:39.383 ************************************ 00:13:39.383 START TEST nvmf_invalid 00:13:39.383 ************************************ 00:13:39.383 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:39.641 * Looking for test storage... 
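Before nvmf_invalid gets going, note the fixed teardown order nvmftestfini followed above: unload the initiator-side kernel modules (nvme-tcp, then nvme-fabrics, then nvme-keyring, per the rmmod lines), kill and reap the nvmf_tgt reactor by pid, restore iptables with the SPDK_NVMF rules filtered out, and flush the test interface address. Condensed into a hedged sketch using the values this run traced:

modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring      # reverse of load order
kill 1020524 && wait 1020524                           # nvmf_tgt traced as reactor_0, pid 1020524
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-added rules
ip -4 addr flush cvl_0_1                               # test interface from this log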
00:13:39.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:39.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.641 --rc genhtml_branch_coverage=1 00:13:39.641 --rc genhtml_function_coverage=1 00:13:39.641 --rc genhtml_legend=1 00:13:39.641 --rc geninfo_all_blocks=1 00:13:39.641 --rc geninfo_unexecuted_blocks=1 00:13:39.641 00:13:39.641 ' 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:39.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.641 --rc genhtml_branch_coverage=1 00:13:39.641 --rc genhtml_function_coverage=1 00:13:39.641 --rc genhtml_legend=1 00:13:39.641 --rc geninfo_all_blocks=1 00:13:39.641 --rc geninfo_unexecuted_blocks=1 00:13:39.641 00:13:39.641 ' 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:39.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.641 --rc genhtml_branch_coverage=1 00:13:39.641 --rc genhtml_function_coverage=1 00:13:39.641 --rc genhtml_legend=1 00:13:39.641 --rc geninfo_all_blocks=1 00:13:39.641 --rc geninfo_unexecuted_blocks=1 00:13:39.641 00:13:39.641 ' 00:13:39.641 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:39.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.641 --rc genhtml_branch_coverage=1 00:13:39.642 --rc genhtml_function_coverage=1 00:13:39.642 --rc genhtml_legend=1 00:13:39.642 --rc geninfo_all_blocks=1 00:13:39.642 --rc geninfo_unexecuted_blocks=1 00:13:39.642 00:13:39.642 ' 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:39.642 06:17:29 
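The lt 1.15 2 test above decides which LCOV_OPTS get exported: cmp_versions splits both version strings on ".-:", normalizes each field with decimal, and compares field by field until one side differs. A self-contained sketch of that comparison (simplified relative to scripts/common.sh: it assumes purely numeric fields without leading zeros, which decimal otherwise enforces):

# ver_lt A B: succeed iff version A sorts strictly before version B.
ver_lt() {
  local IFS=.-:
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1  # equal is not less-than
}
ver_lt 1.15 2 && echo "old lcov"   # matches the trace: 1 < 2 on the first field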
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:39.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:39.642 06:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:42.178 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:42.179 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:42.179 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:42.179 Found net devices under 0000:84:00.0: cvl_0_0 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:42.179 Found net devices under 0000:84:00.1: cvl_0_1 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
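A note on the device-discovery trace above: nvmf/common.sh seeds the e810/x722/mlx arrays from a pci_bus_cache table keyed as "vendor:device", narrows pci_devs to the configured NIC family (both ports here are Intel 0x8086:0x159b, an E810 bound to the ice driver), and then resolves each PCI function to its kernel net interface by globbing /sys/bus/pci/devices/$pci/net/*, which yields cvl_0_0 and cvl_0_1. The sysfs relationship can be reproduced on its own; a small sketch, assuming a Linux host where the sampled PCI address actually exists:

    #!/usr/bin/env bash
    # Map a PCI function to its net interfaces, as the trace above does.
    pci=${1:-0000:84:00.0}   # sample address taken from the log

    vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # e.g. 0x8086 (Intel)
    device=$(cat "/sys/bus/pci/devices/$pci/device")   # e.g. 0x159b (E810/ice)
    echo "Found $pci ($vendor - $device)"

    shopt -s nullglob
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net devices under $pci: ${netdir##*/}"
    done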
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.179 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:42.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:13:42.179 00:13:42.179 --- 10.0.0.2 ping statistics --- 00:13:42.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.179 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:42.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:13:42.179 00:13:42.179 --- 10.0.0.1 ping statistics --- 00:13:42.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.179 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1025088 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1025088 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1025088 ']' 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.179 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:42.179 [2024-12-08 06:17:32.098455] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
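A note on the setup traced above: nvmf_tcp_init turns the two ports into a self-contained point-to-point testbed by moving the target-side interface (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk network namespace, leaving the initiator side (cvl_0_1, 10.0.0.1) in the root namespace, opening TCP port 4420 in iptables, and ping-verifying both directions; nvmfappstart then launches nvmf_tgt inside that namespace and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of the same sequence, assuming an SPDK checkout and substituting a veth pair for the physical cvl_* ports; the socket poll below stands in for waitforlisten, which really retries rpc.py until the app responds:

    #!/usr/bin/env bash
    # Point-to-point NVMe-oF/TCP testbed via a network namespace.
    set -e
    NS=spdk_tgt_ns

    sudo ip netns add "$NS"
    sudo ip link add veth_host type veth peer name veth_tgt
    sudo ip link set veth_tgt netns "$NS"

    sudo ip addr add 10.0.0.1/24 dev veth_host
    sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt
    sudo ip link set veth_host up
    sudo ip netns exec "$NS" ip link set veth_tgt up
    sudo ip netns exec "$NS" ip link set lo up
    # The harness also opens the listener port:
    #   iptables -I INPUT 1 -p tcp --dport 4420 -j ACCEPT

    # Launch the target inside the namespace; the RPC socket lands on the
    # shared filesystem, so it is reachable from the root namespace too.
    sudo ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -m 0xF &
    for _ in $(seq 1 100); do
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done
    ping -c 1 10.0.0.2   # same reachability check as in the log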
00:13:42.179 [2024-12-08 06:17:32.098531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.179 [2024-12-08 06:17:32.172803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:42.179 [2024-12-08 06:17:32.232430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.179 [2024-12-08 06:17:32.232510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.179 [2024-12-08 06:17:32.232524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.180 [2024-12-08 06:17:32.232535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.180 [2024-12-08 06:17:32.232545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.180 [2024-12-08 06:17:32.234317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.180 [2024-12-08 06:17:32.234380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.180 [2024-12-08 06:17:32.234444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.180 [2024-12-08 06:17:32.234447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.438 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.438 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:42.438 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:42.438 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:42.438 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:42.438 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.438 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:42.438 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4793 00:13:42.695 [2024-12-08 06:17:32.695260] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:42.695 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:42.695 { 00:13:42.695 "nqn": "nqn.2016-06.io.spdk:cnode4793", 00:13:42.695 "tgt_name": "foobar", 00:13:42.695 "method": "nvmf_create_subsystem", 00:13:42.695 "req_id": 1 00:13:42.695 } 00:13:42.695 Got JSON-RPC error response 00:13:42.695 response: 00:13:42.695 { 00:13:42.695 "code": -32603, 00:13:42.695 "message": "Unable to find target foobar" 00:13:42.695 }' 00:13:42.696 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:42.696 { 00:13:42.696 "nqn": "nqn.2016-06.io.spdk:cnode4793", 00:13:42.696 "tgt_name": "foobar", 00:13:42.696 "method": "nvmf_create_subsystem", 00:13:42.696 "req_id": 1 00:13:42.696 } 00:13:42.696 Got JSON-RPC error response 00:13:42.696 
response: 00:13:42.696 { 00:13:42.696 "code": -32603, 00:13:42.696 "message": "Unable to find target foobar" 00:13:42.696 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:42.696 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:42.696 06:17:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26540 00:13:42.954 [2024-12-08 06:17:32.984243] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26540: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:42.954 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:42.954 { 00:13:42.954 "nqn": "nqn.2016-06.io.spdk:cnode26540", 00:13:42.954 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:42.954 "method": "nvmf_create_subsystem", 00:13:42.954 "req_id": 1 00:13:42.954 } 00:13:42.954 Got JSON-RPC error response 00:13:42.954 response: 00:13:42.954 { 00:13:42.954 "code": -32602, 00:13:42.954 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:42.954 }' 00:13:42.954 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:42.954 { 00:13:42.954 "nqn": "nqn.2016-06.io.spdk:cnode26540", 00:13:42.954 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:42.954 "method": "nvmf_create_subsystem", 00:13:42.954 "req_id": 1 00:13:42.954 } 00:13:42.954 Got JSON-RPC error response 00:13:42.954 response: 00:13:42.954 { 00:13:42.954 "code": -32602, 00:13:42.954 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:42.954 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:42.954 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:42.954 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8724 00:13:43.212 [2024-12-08 06:17:33.273227] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8724: invalid model number 'SPDK_Controller' 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:43.212 { 00:13:43.212 "nqn": "nqn.2016-06.io.spdk:cnode8724", 00:13:43.212 "model_number": "SPDK_Controller\u001f", 00:13:43.212 "method": "nvmf_create_subsystem", 00:13:43.212 "req_id": 1 00:13:43.212 } 00:13:43.212 Got JSON-RPC error response 00:13:43.212 response: 00:13:43.212 { 00:13:43.212 "code": -32602, 00:13:43.212 "message": "Invalid MN SPDK_Controller\u001f" 00:13:43.212 }' 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:43.212 { 00:13:43.212 "nqn": "nqn.2016-06.io.spdk:cnode8724", 00:13:43.212 "model_number": "SPDK_Controller\u001f", 00:13:43.212 "method": "nvmf_create_subsystem", 00:13:43.212 "req_id": 1 00:13:43.212 } 00:13:43.212 Got JSON-RPC error response 00:13:43.212 response: 00:13:43.212 { 00:13:43.212 "code": -32602, 00:13:43.212 "message": "Invalid MN SPDK_Controller\u001f" 00:13:43.212 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:43.212 06:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
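Before the random-string cases that follow, three fixed negative cases were asserted above: an unknown target name (-t foobar, JSON-RPC error -32603 "Unable to find target"), a 21-byte serial number ending in a 0x1f control byte (-s, error -32602 "Invalid SN"), and a model number containing the same control byte (-d, error -32602 "Invalid MN"). Each check captures rpc.py's combined output and substring-matches the error text. One of them, re-run by hand, assuming an SPDK checkout with the target from above still listening on /var/tmp/spdk.sock:

    #!/usr/bin/env bash
    rpc=./scripts/rpc.py

    # A serial number ending in a 0x1f control byte must be rejected.
    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
          nqn.2016-06.io.spdk:cnode26540 2>&1) || true
    [[ $out == *"Invalid SN"* ]] && echo "rejected as expected"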
00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.212 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.213 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:43.471 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6d' 00:13:43.471 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 61 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'n.Z37~9Qm'\''1+;w5=0Na/W' 00:13:43.472 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'n.Z37~9Qm'\''1+;w5=0Na/W' nqn.2016-06.io.spdk:cnode5577 00:13:43.732 [2024-12-08 06:17:33.622341] nvmf_rpc.c: 
413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5577: invalid serial number 'n.Z37~9Qm'1+;w5=0Na/W' 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:43.732 { 00:13:43.732 "nqn": "nqn.2016-06.io.spdk:cnode5577", 00:13:43.732 "serial_number": "n.Z37~9Qm'\''1+;w5=0Na/W", 00:13:43.732 "method": "nvmf_create_subsystem", 00:13:43.732 "req_id": 1 00:13:43.732 } 00:13:43.732 Got JSON-RPC error response 00:13:43.732 response: 00:13:43.732 { 00:13:43.732 "code": -32602, 00:13:43.732 "message": "Invalid SN n.Z37~9Qm'\''1+;w5=0Na/W" 00:13:43.732 }' 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:43.732 { 00:13:43.732 "nqn": "nqn.2016-06.io.spdk:cnode5577", 00:13:43.732 "serial_number": "n.Z37~9Qm'1+;w5=0Na/W", 00:13:43.732 "method": "nvmf_create_subsystem", 00:13:43.732 "req_id": 1 00:13:43.732 } 00:13:43.732 Got JSON-RPC error response 00:13:43.732 response: 00:13:43.732 { 00:13:43.732 "code": -32602, 00:13:43.732 "message": "Invalid SN n.Z37~9Qm'1+;w5=0Na/W" 00:13:43.732 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 
00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:43.732 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
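The character-by-character trace running through this stretch is gen_random_s: it holds a 96-entry table of decimal codes for ASCII 32 through 127, and on each iteration picks one (presumably via $RANDOM, which target/invalid.sh pins with RANDOM=0 up front so the strings are reproducible), renders it with printf %x plus echo -e '\xNN', and appends it to the result. A compact sketch of the same idea, restricted to codes 32..126 (the original table also includes 127/DEL):

    #!/usr/bin/env bash
    # Build a reproducible length-N string of printable ASCII, as gen_random_s does.
    gen_random_s() {
        local length=$1 ll code ch string=
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( 32 + RANDOM % 95 ))                  # printable ASCII 32..126
            printf -v ch '%b' "$(printf '\\x%x' "$code")"
            string+=$ch
        done
        echo "$string"
    }

    RANDOM=0          # same trick as the test: seed for reproducibility
    gen_random_s 21   # serial-number probe
    gen_random_s 41   # model-number probe

The lengths are not arbitrary: NVMe's Identify Controller data fixes the serial number at 20 ASCII bytes and the model number at 40, so the 21-character string rejected above ("Invalid SN") and the 41-character string being built below each overflow their field by exactly one byte. The trace of that second string continues here.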
[xtrace elided: target/invalid.sh@24-25 loops over the remaining positions, each iteration picking a code point with printf %x, decoding it with echo -e, and appending the character to string until ll reaches length]
00:13:43.734 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ j == \- ]] 00:13:43.734 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'j'\''gFVkR8K~AQ(F4:U0/AGLPN2s5:yf x)r]i[3fj~' 00:13:43.734 06:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'j'\''gFVkR8K~AQ(F4:U0/AGLPN2s5:yf x)r]i[3fj~' nqn.2016-06.io.spdk:cnode19513 00:13:43.992 [2024-12-08 06:17:34.071773] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19513: invalid model number 'j'gFVkR8K~AQ(F4:U0/AGLPN2s5:yf x)r]i[3fj~' 00:13:43.992 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:43.992 { 00:13:43.992 "nqn": "nqn.2016-06.io.spdk:cnode19513", 00:13:43.992 "model_number": "j'\''gFVkR8K~AQ(F4:U0/AGLPN2s5:yf x)r]i[3fj~", 00:13:43.992 "method": "nvmf_create_subsystem", 00:13:43.992
"req_id": 1 00:13:43.992 } 00:13:43.992 Got JSON-RPC error response 00:13:43.992 response: 00:13:43.992 { 00:13:43.992 "code": -32602, 00:13:43.992 "message": "Invalid MN j'\''gFVkR8K~AQ(F4:U0/AGLPN2s5:yf x)r]i[3fj~" 00:13:43.992 }' 00:13:43.992 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:43.992 { 00:13:43.992 "nqn": "nqn.2016-06.io.spdk:cnode19513", 00:13:43.992 "model_number": "j'gFVkR8K~AQ(F4:U0/AGLPN2s5:yf x)r]i[3fj~", 00:13:43.992 "method": "nvmf_create_subsystem", 00:13:43.992 "req_id": 1 00:13:43.992 } 00:13:43.992 Got JSON-RPC error response 00:13:43.992 response: 00:13:43.992 { 00:13:43.992 "code": -32602, 00:13:43.992 "message": "Invalid MN j'gFVkR8K~AQ(F4:U0/AGLPN2s5:yf x)r]i[3fj~" 00:13:43.992 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:43.992 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:44.252 [2024-12-08 06:17:34.356794] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.513 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:44.771 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:44.771 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:44.771 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:44.771 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:44.771 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:45.030 [2024-12-08 06:17:34.918600] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:45.030 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:45.030 { 00:13:45.030 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:45.030 "listen_address": { 00:13:45.030 "trtype": "tcp", 00:13:45.030 "traddr": "", 00:13:45.030 "trsvcid": "4421" 00:13:45.030 }, 00:13:45.030 "method": "nvmf_subsystem_remove_listener", 00:13:45.030 "req_id": 1 00:13:45.030 } 00:13:45.030 Got JSON-RPC error response 00:13:45.030 response: 00:13:45.030 { 00:13:45.030 "code": -32602, 00:13:45.030 "message": "Invalid parameters" 00:13:45.030 }' 00:13:45.030 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:45.030 { 00:13:45.030 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:45.030 "listen_address": { 00:13:45.030 "trtype": "tcp", 00:13:45.030 "traddr": "", 00:13:45.030 "trsvcid": "4421" 00:13:45.030 }, 00:13:45.030 "method": "nvmf_subsystem_remove_listener", 00:13:45.030 "req_id": 1 00:13:45.030 } 00:13:45.030 Got JSON-RPC error response 00:13:45.030 response: 00:13:45.030 { 00:13:45.030 "code": -32602, 00:13:45.030 "message": "Invalid parameters" 00:13:45.030 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:45.030 06:17:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5199 -i 0 00:13:45.287 [2024-12-08 06:17:35.183439] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5199: invalid cntlid range [0-65519] 00:13:45.287 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:45.287 { 00:13:45.287 "nqn": "nqn.2016-06.io.spdk:cnode5199", 00:13:45.287 "min_cntlid": 0, 00:13:45.287 "method": "nvmf_create_subsystem", 00:13:45.287 "req_id": 1 00:13:45.287 } 00:13:45.287 Got JSON-RPC error response 00:13:45.287 response: 00:13:45.287 { 00:13:45.287 "code": -32602, 00:13:45.287 "message": "Invalid cntlid range [0-65519]" 00:13:45.287 }' 00:13:45.287 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:45.287 { 00:13:45.287 "nqn": "nqn.2016-06.io.spdk:cnode5199", 00:13:45.287 "min_cntlid": 0, 00:13:45.287 "method": "nvmf_create_subsystem", 00:13:45.287 "req_id": 1 00:13:45.287 } 00:13:45.287 Got JSON-RPC error response 00:13:45.287 response: 00:13:45.287 { 00:13:45.287 "code": -32602, 00:13:45.287 "message": "Invalid cntlid range [0-65519]" 00:13:45.287 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:45.287 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode579 -i 65520 00:13:45.543 [2024-12-08 06:17:35.468449] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode579: invalid cntlid range [65520-65519] 00:13:45.543 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:45.543 { 00:13:45.543 "nqn": "nqn.2016-06.io.spdk:cnode579", 00:13:45.543 "min_cntlid": 65520, 00:13:45.543 "method": "nvmf_create_subsystem", 00:13:45.543 "req_id": 1 00:13:45.543 } 00:13:45.543 Got JSON-RPC error response 00:13:45.543 response: 00:13:45.543 { 00:13:45.543 "code": -32602, 00:13:45.543 "message": "Invalid cntlid range [65520-65519]" 00:13:45.543 }' 00:13:45.543 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:45.543 { 00:13:45.543 "nqn": "nqn.2016-06.io.spdk:cnode579", 00:13:45.543 "min_cntlid": 65520, 00:13:45.543 "method": "nvmf_create_subsystem", 00:13:45.543 "req_id": 1 00:13:45.543 } 00:13:45.543 Got JSON-RPC error response 00:13:45.543 response: 00:13:45.543 { 00:13:45.543 "code": -32602, 00:13:45.543 "message": "Invalid cntlid range [65520-65519]" 00:13:45.544 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:45.544 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32530 -I 0 00:13:45.801 [2024-12-08 06:17:35.737265] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32530: invalid cntlid range [1-0] 00:13:45.801 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:45.801 { 00:13:45.801 "nqn": "nqn.2016-06.io.spdk:cnode32530", 00:13:45.801 "max_cntlid": 0, 00:13:45.801 "method": "nvmf_create_subsystem", 00:13:45.801 "req_id": 1 00:13:45.801 } 00:13:45.801 Got JSON-RPC error response 00:13:45.801 response: 00:13:45.801 { 00:13:45.801 "code": -32602, 00:13:45.801 "message": "Invalid cntlid range [1-0]" 00:13:45.801 }' 00:13:45.801 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:45.801 { 00:13:45.801 "nqn": "nqn.2016-06.io.spdk:cnode32530", 00:13:45.801 "max_cntlid": 0, 00:13:45.801 
"method": "nvmf_create_subsystem", 00:13:45.801 "req_id": 1 00:13:45.801 } 00:13:45.801 Got JSON-RPC error response 00:13:45.801 response: 00:13:45.801 { 00:13:45.801 "code": -32602, 00:13:45.801 "message": "Invalid cntlid range [1-0]" 00:13:45.801 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:45.801 06:17:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27003 -I 65520 00:13:46.058 [2024-12-08 06:17:36.018203] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27003: invalid cntlid range [1-65520] 00:13:46.058 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:46.058 { 00:13:46.058 "nqn": "nqn.2016-06.io.spdk:cnode27003", 00:13:46.058 "max_cntlid": 65520, 00:13:46.058 "method": "nvmf_create_subsystem", 00:13:46.058 "req_id": 1 00:13:46.058 } 00:13:46.058 Got JSON-RPC error response 00:13:46.058 response: 00:13:46.058 { 00:13:46.058 "code": -32602, 00:13:46.058 "message": "Invalid cntlid range [1-65520]" 00:13:46.058 }' 00:13:46.058 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:46.058 { 00:13:46.058 "nqn": "nqn.2016-06.io.spdk:cnode27003", 00:13:46.058 "max_cntlid": 65520, 00:13:46.058 "method": "nvmf_create_subsystem", 00:13:46.058 "req_id": 1 00:13:46.058 } 00:13:46.058 Got JSON-RPC error response 00:13:46.058 response: 00:13:46.058 { 00:13:46.058 "code": -32602, 00:13:46.058 "message": "Invalid cntlid range [1-65520]" 00:13:46.058 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:46.058 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8756 -i 6 -I 5 00:13:46.315 [2024-12-08 06:17:36.283108] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8756: invalid cntlid range [6-5] 00:13:46.315 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:46.315 { 00:13:46.315 "nqn": "nqn.2016-06.io.spdk:cnode8756", 00:13:46.315 "min_cntlid": 6, 00:13:46.315 "max_cntlid": 5, 00:13:46.315 "method": "nvmf_create_subsystem", 00:13:46.315 "req_id": 1 00:13:46.315 } 00:13:46.315 Got JSON-RPC error response 00:13:46.315 response: 00:13:46.315 { 00:13:46.315 "code": -32602, 00:13:46.315 "message": "Invalid cntlid range [6-5]" 00:13:46.315 }' 00:13:46.315 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:46.315 { 00:13:46.315 "nqn": "nqn.2016-06.io.spdk:cnode8756", 00:13:46.315 "min_cntlid": 6, 00:13:46.315 "max_cntlid": 5, 00:13:46.315 "method": "nvmf_create_subsystem", 00:13:46.315 "req_id": 1 00:13:46.315 } 00:13:46.315 Got JSON-RPC error response 00:13:46.315 response: 00:13:46.315 { 00:13:46.315 "code": -32602, 00:13:46.315 "message": "Invalid cntlid range [6-5]" 00:13:46.315 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:46.315 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:46.315 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:46.315 { 00:13:46.315 "name": "foobar", 00:13:46.315 "method": "nvmf_delete_target", 00:13:46.315 "req_id": 1 00:13:46.315 } 
00:13:46.315 Got JSON-RPC error response 00:13:46.315 response: 00:13:46.315 { 00:13:46.315 "code": -32602, 00:13:46.315 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:46.315 }' 00:13:46.315 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:46.315 { 00:13:46.315 "name": "foobar", 00:13:46.315 "method": "nvmf_delete_target", 00:13:46.315 "req_id": 1 00:13:46.315 } 00:13:46.315 Got JSON-RPC error response 00:13:46.315 response: 00:13:46.315 { 00:13:46.315 "code": -32602, 00:13:46.315 "message": "The specified target doesn't exist, cannot delete it." 00:13:46.315 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:46.316 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:46.316 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:46.316 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:46.316 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:46.316 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:46.316 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:46.316 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:46.316 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:46.316 rmmod nvme_tcp 00:13:46.575 rmmod nvme_fabrics 00:13:46.575 rmmod nvme_keyring 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1025088 ']' 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1025088 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1025088 ']' 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1025088 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1025088 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025088' 00:13:46.575 killing process with pid 1025088 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1025088 00:13:46.575 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1025088 00:13:46.835 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:46.835 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:46.835 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:46.835 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:46.835 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:46.835 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:46.835 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:46.835 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:46.835 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:46.835 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.835 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.835 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.746 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:48.746 00:13:48.746 real 0m9.304s 00:13:48.746 user 0m22.219s 00:13:48.746 sys 0m2.644s 00:13:48.746 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.746 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:48.746 ************************************ 00:13:48.746 END TEST nvmf_invalid 00:13:48.746 ************************************ 00:13:48.746 06:17:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:48.746 06:17:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:48.746 06:17:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.746 06:17:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.746 ************************************ 00:13:48.746 START TEST nvmf_connect_stress 00:13:48.746 ************************************ 00:13:48.746 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:49.005 * Looking for test storage... 
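Everything from the per-character xtrace above through the END TEST nvmf_invalid banner follows one assert-the-error pattern: construct a deliberately bad argument, submit it to the target over JSON-RPC with scripts/rpc.py, and glob-match the -32602 error text ("Invalid MN", "Invalid cntlid range", "The specified target doesn't exist..."). A minimal bash sketch of that pattern, with gen_random_string as an illustrative stand-in for the printf %x / echo -e loop in invalid.sh (the -d/-i/-I flags, NQNs, and error strings are the ones exercised in the trace; a running target and the workspace rpc.py path are assumed):

  #!/usr/bin/env bash
  # Sketch only: assumes an nvmf target is already listening on /var/tmp/spdk.sock.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Illustrative stand-in for the per-character construction traced above.
  gen_random_string() {
      local length=$1 string='' ll hex ch
      for (( ll = 0; ll < length; ll++ )); do
          printf -v hex '%x' $(( RANDOM % 95 + 32 ))   # printable ASCII, 0x20-0x7e
          printf -v ch "\x${hex}"                       # decode the escape, spaces included
          string+=$ch
      done
      printf '%s\n' "$string"
  }

  mn=$(gen_random_string 41)
  # An arbitrary 41-character model number must be rejected with "Invalid MN".
  out=$("$rpc" nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode19513 2>&1) || true
  [[ $out == *'Invalid MN'* ]] || echo "unexpected response: $out"

  # The cntlid checks use the same shape: -i sets min_cntlid, -I sets max_cntlid.
  out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5199 -i 0 2>&1) || true
  [[ $out == *'Invalid cntlid range'* ]] || echo "unexpected response: $out"

The cntlid cases in the trace (-i 0, -I 0, -I 65520, -i 6 -I 5) all fall outside, or invert, the 1-65519 window the target accepts, so each request is refused with code -32602 before any subsystem state changes.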
00:13:49.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.005 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:49.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.005 --rc genhtml_branch_coverage=1 00:13:49.005 --rc genhtml_function_coverage=1 00:13:49.005 --rc genhtml_legend=1 00:13:49.005 --rc geninfo_all_blocks=1 00:13:49.005 --rc geninfo_unexecuted_blocks=1 00:13:49.005 00:13:49.005 ' 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:49.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.005 --rc genhtml_branch_coverage=1 00:13:49.005 --rc genhtml_function_coverage=1 00:13:49.005 --rc genhtml_legend=1 00:13:49.005 --rc geninfo_all_blocks=1 00:13:49.005 --rc geninfo_unexecuted_blocks=1 00:13:49.005 00:13:49.005 ' 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:49.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.005 --rc genhtml_branch_coverage=1 00:13:49.005 --rc genhtml_function_coverage=1 00:13:49.005 --rc genhtml_legend=1 00:13:49.005 --rc geninfo_all_blocks=1 00:13:49.005 --rc geninfo_unexecuted_blocks=1 00:13:49.005 00:13:49.005 ' 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:49.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.005 --rc genhtml_branch_coverage=1 00:13:49.005 --rc genhtml_function_coverage=1 00:13:49.005 --rc genhtml_legend=1 00:13:49.005 --rc geninfo_all_blocks=1 00:13:49.005 --rc geninfo_unexecuted_blocks=1 00:13:49.005 00:13:49.005 ' 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.005 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=[paths/export.sh prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, already repeated many times, ahead of the standard system directories; full value elided] 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=[elided, same prefixes] 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=[elided, same prefixes] 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo [the exported PATH, elided] 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:13:49.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:49.006 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:51.542 06:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:51.542 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:51.543 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:51.543 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:51.543 Found net devices under 0000:84:00.0: cvl_0_0 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:51.543 Found net devices under 0000:84:00.1: cvl_0_1 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:51.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:13:51.543 00:13:51.543 --- 10.0.0.2 ping statistics --- 00:13:51.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.543 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:51.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:13:51.543 00:13:51.543 --- 10.0.0.1 ping statistics --- 00:13:51.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.543 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1027824 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1027824 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1027824 ']' 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.543 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.544 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:51.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.544 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.544 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.544 [2024-12-08 06:17:41.417106] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:13:51.544 [2024-12-08 06:17:41.417194] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.544 [2024-12-08 06:17:41.488066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:51.544 [2024-12-08 06:17:41.544424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.544 [2024-12-08 06:17:41.544492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.544 [2024-12-08 06:17:41.544508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.544 [2024-12-08 06:17:41.544519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.544 [2024-12-08 06:17:41.544528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.544 [2024-12-08 06:17:41.546130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.544 [2024-12-08 06:17:41.546195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.544 [2024-12-08 06:17:41.546199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.804 [2024-12-08 06:17:41.695968] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
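The trace above is the harness's standard TCP fixture: one physical port (cvl_0_0) is moved into a private network namespace, cvl_0_0_ns_spdk, to act as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator, an iptables ACCEPT rule tagged SPDK_NVMF opens port 4420, one ping in each direction proves the path, and nvmf_tgt is then launched inside the namespace on core mask 0xE. A minimal sketch of the same split, using a hypothetical veth pair instead of the physical ports so it runs on any box:

# Sketch only: the harness moves real E810 ports; the veth names here are invented.
ip netns add spdk_tgt_ns
ip link add veth_init type veth peer name veth_tgt    # initiator/target pair
ip link set veth_tgt netns spdk_tgt_ns                # target side leaves the root ns
ip addr add 10.0.0.1/24 dev veth_init
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF                    # tagged so cleanup can find it
ping -c 1 10.0.0.2                                    # root ns -> namespace
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1          # namespace -> root ns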
00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.804 [2024-12-08 06:17:41.713290] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.804 NULL1 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1027847 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 
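Provisioning above happens over the RPC socket before the stress tool starts: a TCP transport with 8 KiB I/O units, subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, at most 10 namespaces), a listener on 10.0.0.2:4420, and a 1000 MiB null bdev. The same four calls replayed directly with scripts/rpc.py instead of the rpc_cmd wrapper (a sketch; every value is the one echoed in the trace, and the socket is the default /var/tmp/spdk.sock):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192            # -o toggles the C2H-success optimization off
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                      # any host, serial number, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                    # 1000 MiB backing bdev, 512-byte blocks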
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.804 06:17:41 
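The twenty for i in $(seq 1 20) / cat pairs above assemble rpc.txt, one appended here-document batch per iteration, while connect_stress (PERF_PID=1027847) has been started against the subsystem with a 10-second run time (-t 10). The @34/@35 pattern that fills the next few seconds of the log is the driver loop replaying that batch for as long as the stress tool stays alive, roughly (a sketch; rpc_cmd stands for the harness helper that forwards the batch to rpc.py):

# Replay the RPC batch until connect_stress exits on its own.
while kill -0 "$PERF_PID" 2> /dev/null; do
    rpc_cmd < "$rpcs"          # $rpcs = .../test/nvmf/target/rpc.txt
done
wait "$PERF_PID"               # then surface the stress tool's exit status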
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.804 06:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.064 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.064 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:52.064 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.064 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.064 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.325 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.325 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:52.325 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.325 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.325 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.892 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.892 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:52.892 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.892 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.892 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.150 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.150 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:53.150 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.150 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.150 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.410 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.410 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:53.410 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.410 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.410 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.671 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.671 06:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:53.671 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.671 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.671 06:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.929 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.929 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:53.929 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.929 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.929 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.497 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.497 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:54.497 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.497 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.497 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.759 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.759 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:54.759 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.759 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.759 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.019 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.019 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:55.019 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.019 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.019 06:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.277 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.277 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:55.277 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.277 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.277 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.534 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.534 06:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:55.534 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.534 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.534 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.102 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.102 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:56.102 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.102 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.102 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.361 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.361 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:56.361 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.361 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.361 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.620 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.620 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:56.620 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.620 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.620 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.878 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.878 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:56.878 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.878 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.878 06:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.160 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.160 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:57.160 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.160 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.160 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.729 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.729 06:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:57.729 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.729 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.729 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.988 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.988 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:57.988 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.988 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.988 06:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.247 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.247 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:58.247 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.247 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.247 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.508 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.508 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:58.508 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.508 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.508 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.767 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.767 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:58.767 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.767 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.767 06:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.336 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.336 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:59.336 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.336 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.336 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.594 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.594 06:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:59.594 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.594 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.594 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.853 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.853 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:13:59.853 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.853 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.853 06:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.112 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.112 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:14:00.112 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.112 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.112 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.392 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.392 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:14:00.392 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.392 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.392 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.959 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.959 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:14:00.959 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.959 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.959 06:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.218 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.218 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:14:01.218 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.218 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.218 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.477 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.477 06:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:14:01.477 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.477 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.477 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.737 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.737 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:14:01.737 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.737 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.737 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.737 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1027847 00:14:01.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1027847) - No such process 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1027847 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:01.996 rmmod nvme_tcp 00:14:01.996 rmmod nvme_fabrics 00:14:01.996 rmmod nvme_keyring 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1027824 ']' 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1027824 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1027824 ']' 00:14:01.996 06:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1027824 00:14:01.996 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:02.254 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.254 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1027824 00:14:02.254 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:02.254 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:02.254 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1027824' 00:14:02.254 killing process with pid 1027824 00:14:02.254 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1027824 00:14:02.254 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1027824 00:14:02.571 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:02.571 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:02.571 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:02.571 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:02.571 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:02.571 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:02.571 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:02.571 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:02.571 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:02.571 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.571 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.571 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.478 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:04.478 00:14:04.478 real 0m15.578s 00:14:04.478 user 0m38.426s 00:14:04.478 sys 0m6.236s 00:14:04.479 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.479 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.479 ************************************ 00:14:04.479 END TEST nvmf_connect_stress 00:14:04.479 ************************************ 00:14:04.479 06:17:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:04.479 06:17:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:04.479 
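Teardown of the connect_stress fixture above is deliberately defensive: killprocess re-resolves the recorded pid to a process name (ps --no-headers -o comm=) and refuses to signal anything that now reads sudo, and iptr drops only the firewall rules this run tagged, by round-tripping the ruleset through a filter. Both idioms in isolation (a sketch):

# Kill the target only if the pid still names the expected process.
name=$(ps --no-headers -o comm= "$nvmfpid")
if [ -n "$name" ] && [ "$name" != sudo ]; then
    kill "$nvmfpid" && wait "$nvmfpid" 2> /dev/null
fi
# Remove only the rules carrying the SPDK_NVMF comment tag.
iptables-save | grep -v SPDK_NVMF | iptables-restore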
06:17:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.479 06:17:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:04.479 ************************************ 00:14:04.479 START TEST nvmf_fused_ordering 00:14:04.479 ************************************ 00:14:04.479 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:04.479 * Looking for test storage... 00:14:04.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.479 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:04.479 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:14:04.479 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:04.739 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.740 --rc genhtml_branch_coverage=1 00:14:04.740 --rc genhtml_function_coverage=1 00:14:04.740 --rc genhtml_legend=1 00:14:04.740 --rc geninfo_all_blocks=1 00:14:04.740 --rc geninfo_unexecuted_blocks=1 00:14:04.740 00:14:04.740 ' 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.740 --rc genhtml_branch_coverage=1 00:14:04.740 --rc genhtml_function_coverage=1 00:14:04.740 --rc genhtml_legend=1 00:14:04.740 --rc geninfo_all_blocks=1 00:14:04.740 --rc geninfo_unexecuted_blocks=1 00:14:04.740 00:14:04.740 ' 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.740 --rc genhtml_branch_coverage=1 00:14:04.740 --rc genhtml_function_coverage=1 00:14:04.740 --rc genhtml_legend=1 00:14:04.740 --rc geninfo_all_blocks=1 00:14:04.740 --rc geninfo_unexecuted_blocks=1 00:14:04.740 00:14:04.740 ' 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.740 --rc genhtml_branch_coverage=1 00:14:04.740 --rc genhtml_function_coverage=1 00:14:04.740 --rc genhtml_legend=1 00:14:04.740 --rc geninfo_all_blocks=1 00:14:04.740 --rc geninfo_unexecuted_blocks=1 00:14:04.740 00:14:04.740 ' 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
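The cmp_versions / decimal calls above are a numeric, segment-wise version gate for lcov: 1.15 splits into (1 15), 2 into (2), the first segments already decide 1 < 2, so lt 1.15 2 succeeds and the pre-2.0 branch/function coverage flags are exported into LCOV_OPTS. The comparison reduces to (a sketch of the idea, not the harness's exact function):

# True when dotted version $1 sorts numerically before $2,
# segment by segment, with missing segments treated as 0.
version_lt() {
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                   # equal is not "less than"
}
version_lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"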
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
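One quirk visible in the PATH exports above: paths/export.sh prepends the same golangci/protoc/go directories every time it is sourced, so by this point PATH carries several copies of each entry. Harmless, but an idempotent prepend avoids the growth (a sketch of an alternative; export.sh itself does not do this):

# Prepend a directory only if PATH does not already contain it.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;               # already present: leave PATH alone
        *) PATH=$1:$PATH ;;
    esac
}
path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin
export PATH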
-- # '[' '' -eq 1 ']' 00:14:04.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:04.740 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:06.642 06:17:56 
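The "integer expression expected" complaint above is a real, if benign, bug: nvmf/common.sh line 33 evaluates [ '' -eq 1 ] because the flag it tests is empty in this environment, and test(1) rejects an empty operand for -eq. The condition still ends up false and the run continues, but the defensive spelling silences the noise (a sketch; SOME_FLAG is a stand-in for whichever variable common.sh tests there):

# An empty or unset flag defaults to 0 instead of tripping test(1).
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi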
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:06.642 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
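The array setup above whitelists the NIC families the TCP tests may claim, keyed by PCI vendor:device ID: Intel E810 (0x1592, 0x159b), Intel x722 (0x37d2), and a list of Mellanox parts; the port found here, 0000:84:00.0 with ID 0x8086 - 0x159b, lands in the e810 bucket and is bound to the ice driver. The lookup reduces to (a sketch; pci_bus_cache is populated elsewhere by the harness and is seeded below with the value from this trace):

# Bucket NICs by vendor:device ID; only whitelisted families are used.
declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:84:00.0 0000:84:00.1"    # seeded from the trace
)
e810=()
e810+=(${pci_bus_cache["0x8086:0x1592"]})    # E810-C: absent on this node
e810+=(${pci_bus_cache["0x8086:0x159b"]})    # E810-XXV: both ports match
echo "E810 candidates: ${e810[*]}"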
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:06.642 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:06.642 Found net devices under 0000:84:00.0: cvl_0_0 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:06.642 Found net devices under 0000:84:00.1: cvl_0_1 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
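Mapping each claimed PCI function to its kernel interface, the step that prints the two "Found net devices" lines above, is pure sysfs: glob the device's net/ directory, then strip the directory prefix with the ##*/ expansion. Condensed (a sketch; the address is the one from this run):

pci=0000:84:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"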
-- # net_devs+=("${pci_net_devs[@]}") 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:06.642 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.901 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.901 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.901 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.901 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:06.901 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.901 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.901 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.901 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:06.901 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:06.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:14:06.901 00:14:06.901 --- 10.0.0.2 ping statistics --- 00:14:06.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.902 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:14:06.902 00:14:06.902 --- 10.0.0.1 ping statistics --- 00:14:06.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.902 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1031101 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1031101 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1031101 ']' 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:06.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.902 06:17:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.902 [2024-12-08 06:17:56.961159] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:14:06.902 [2024-12-08 06:17:56.961245] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.162 [2024-12-08 06:17:57.033531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.162 [2024-12-08 06:17:57.093346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.162 [2024-12-08 06:17:57.093397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.162 [2024-12-08 06:17:57.093412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.162 [2024-12-08 06:17:57.093422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.162 [2024-12-08 06:17:57.093432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.162 [2024-12-08 06:17:57.094138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.162 [2024-12-08 06:17:57.243174] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.162 [2024-12-08 06:17:57.259395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.162 NULL1 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.162 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.422 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.422 06:17:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:07.422 [2024-12-08 06:17:57.303498] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
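
For readers reconstructing what just happened from the xtrace noise: stripped down, the whole bring-up is roughly the sequence below. This is a sketch, not the harness itself. $SPDK is shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, rpc.py stands in for the trace's rpc_cmd wrapper, the iptables comment tag is dropped, and the backgrounding of nvmf_tgt is simplified; the individual commands all appear verbatim in the trace above.

# Put the target-side port (cvl_0_0, 10.0.0.2) in its own network namespace;
# the initiator-side port (cvl_0_1, 10.0.0.1) stays in the default namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the target inside the namespace, pinned to core 1 (-m 0x2); the
# harness then waits for it to listen on /var/tmp/spdk.sock before issuing RPCs.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &

# TCP transport, subsystem cnode1, a listener on 10.0.0.2:4420, and a
# 1000 MB / 512 B-block null bdev (logged above as "size: 1GB") as namespace 1.
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" bdev_null_create NULL1 1000 512
"$SPDK/scripts/rpc.py" bdev_wait_for_examine
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Exercise fused-command ordering against that listener.
"$SPDK/test/nvme/fused_ordering/fused_ordering" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
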
00:14:07.423 [2024-12-08 06:17:57.303533] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031162 ] 00:14:07.684 Attached to nqn.2016-06.io.spdk:cnode1 00:14:07.684 Namespace ID: 1 size: 1GB 00:14:07.684 fused_ordering(0) 00:14:07.684 fused_ordering(1) 00:14:07.684 fused_ordering(2) 00:14:07.684 fused_ordering(3) 00:14:07.684 fused_ordering(4) 00:14:07.684 fused_ordering(5) 00:14:07.684 fused_ordering(6) 00:14:07.684 fused_ordering(7) 00:14:07.684 fused_ordering(8) 00:14:07.684 fused_ordering(9) 00:14:07.684 fused_ordering(10) 00:14:07.684 fused_ordering(11) 00:14:07.684 fused_ordering(12) 00:14:07.684 fused_ordering(13) 00:14:07.684 fused_ordering(14) 00:14:07.684 fused_ordering(15) 00:14:07.684 fused_ordering(16) 00:14:07.684 fused_ordering(17) 00:14:07.684 fused_ordering(18) 00:14:07.684 fused_ordering(19) 00:14:07.684 fused_ordering(20) 00:14:07.684 fused_ordering(21) 00:14:07.684 fused_ordering(22) 00:14:07.684 fused_ordering(23) 00:14:07.684 fused_ordering(24) 00:14:07.684 fused_ordering(25) 00:14:07.684 fused_ordering(26) 00:14:07.684 fused_ordering(27) 00:14:07.684 fused_ordering(28) 00:14:07.684 fused_ordering(29) 00:14:07.684 fused_ordering(30) 00:14:07.684 fused_ordering(31) 00:14:07.684 fused_ordering(32) 00:14:07.684 fused_ordering(33) 00:14:07.684 fused_ordering(34) 00:14:07.684 fused_ordering(35) 00:14:07.684 fused_ordering(36) 00:14:07.684 fused_ordering(37) 00:14:07.684 fused_ordering(38) 00:14:07.684 fused_ordering(39) 00:14:07.684 fused_ordering(40) 00:14:07.684 fused_ordering(41) 00:14:07.684 fused_ordering(42) 00:14:07.684 fused_ordering(43) 00:14:07.684 fused_ordering(44) 00:14:07.684 fused_ordering(45) 00:14:07.684 fused_ordering(46) 00:14:07.684 fused_ordering(47) 00:14:07.684 fused_ordering(48) 00:14:07.684 fused_ordering(49) 00:14:07.684 fused_ordering(50) 00:14:07.684 fused_ordering(51) 00:14:07.684 fused_ordering(52) 00:14:07.684 fused_ordering(53) 00:14:07.684 fused_ordering(54) 00:14:07.684 fused_ordering(55) 00:14:07.684 fused_ordering(56) 00:14:07.684 fused_ordering(57) 00:14:07.684 fused_ordering(58) 00:14:07.684 fused_ordering(59) 00:14:07.684 fused_ordering(60) 00:14:07.684 fused_ordering(61) 00:14:07.684 fused_ordering(62) 00:14:07.684 fused_ordering(63) 00:14:07.684 fused_ordering(64) 00:14:07.684 fused_ordering(65) 00:14:07.684 fused_ordering(66) 00:14:07.684 fused_ordering(67) 00:14:07.684 fused_ordering(68) 00:14:07.684 fused_ordering(69) 00:14:07.684 fused_ordering(70) 00:14:07.684 fused_ordering(71) 00:14:07.684 fused_ordering(72) 00:14:07.684 fused_ordering(73) 00:14:07.684 fused_ordering(74) 00:14:07.684 fused_ordering(75) 00:14:07.684 fused_ordering(76) 00:14:07.684 fused_ordering(77) 00:14:07.684 fused_ordering(78) 00:14:07.684 fused_ordering(79) 00:14:07.684 fused_ordering(80) 00:14:07.684 fused_ordering(81) 00:14:07.684 fused_ordering(82) 00:14:07.684 fused_ordering(83) 00:14:07.684 fused_ordering(84) 00:14:07.684 fused_ordering(85) 00:14:07.684 fused_ordering(86) 00:14:07.684 fused_ordering(87) 00:14:07.684 fused_ordering(88) 00:14:07.684 fused_ordering(89) 00:14:07.684 fused_ordering(90) 00:14:07.684 fused_ordering(91) 00:14:07.684 fused_ordering(92) 00:14:07.684 fused_ordering(93) 00:14:07.684 fused_ordering(94) 00:14:07.684 fused_ordering(95) 00:14:07.684 fused_ordering(96) 00:14:07.684 fused_ordering(97) 00:14:07.684 fused_ordering(98) 
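
The "Attached to nqn.2016-06.io.spdk:cnode1" line above is the test binary's userspace initiator dialing the transport ID it was handed on the command line. For orientation only (the test does not use the kernel initiator), the same attach expressed with nvme-cli would be:

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
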
[fused_ordering(99) through fused_ordering(958) elided: 860 consecutive progress counters identical in form to those above, with timestamps advancing from 00:14:07.684 to 00:14:09.658]
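
The binary prints one fused_ordering(N) progress line per index, and the run continues below through fused_ordering(1023) with no index skipped. A quick way to re-verify that from a saved copy of this console output (console.log is a hypothetical capture, not a file the harness writes):

# Count distinct counters in this test's portion of the log; expect 1024 (0..1023).
grep -o 'fused_ordering([0-9]*)' console.log | sort -u | wc -l
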
00:14:09.658 fused_ordering(959) 00:14:09.658 fused_ordering(960) 00:14:09.658 fused_ordering(961) 00:14:09.658 fused_ordering(962) 00:14:09.658 fused_ordering(963) 00:14:09.658 fused_ordering(964) 00:14:09.658 fused_ordering(965) 00:14:09.658 fused_ordering(966) 00:14:09.658 fused_ordering(967) 00:14:09.658 fused_ordering(968) 00:14:09.658 fused_ordering(969) 00:14:09.658 fused_ordering(970) 00:14:09.658 fused_ordering(971) 00:14:09.658 fused_ordering(972) 00:14:09.658 fused_ordering(973) 00:14:09.658 fused_ordering(974) 00:14:09.658 fused_ordering(975) 00:14:09.658 fused_ordering(976) 00:14:09.658 fused_ordering(977) 00:14:09.658 fused_ordering(978) 00:14:09.658 fused_ordering(979) 00:14:09.658 fused_ordering(980) 00:14:09.658 fused_ordering(981) 00:14:09.658 fused_ordering(982) 00:14:09.658 fused_ordering(983) 00:14:09.658 fused_ordering(984) 00:14:09.658 fused_ordering(985) 00:14:09.658 fused_ordering(986) 00:14:09.658 fused_ordering(987) 00:14:09.658 fused_ordering(988) 00:14:09.658 fused_ordering(989) 00:14:09.658 fused_ordering(990) 00:14:09.658 fused_ordering(991) 00:14:09.658 fused_ordering(992) 00:14:09.658 fused_ordering(993) 00:14:09.658 fused_ordering(994) 00:14:09.658 fused_ordering(995) 00:14:09.658 fused_ordering(996) 00:14:09.658 fused_ordering(997) 00:14:09.658 fused_ordering(998) 00:14:09.658 fused_ordering(999) 00:14:09.658 fused_ordering(1000) 00:14:09.658 fused_ordering(1001) 00:14:09.658 fused_ordering(1002) 00:14:09.658 fused_ordering(1003) 00:14:09.658 fused_ordering(1004) 00:14:09.658 fused_ordering(1005) 00:14:09.658 fused_ordering(1006) 00:14:09.658 fused_ordering(1007) 00:14:09.658 fused_ordering(1008) 00:14:09.658 fused_ordering(1009) 00:14:09.658 fused_ordering(1010) 00:14:09.658 fused_ordering(1011) 00:14:09.658 fused_ordering(1012) 00:14:09.658 fused_ordering(1013) 00:14:09.658 fused_ordering(1014) 00:14:09.658 fused_ordering(1015) 00:14:09.658 fused_ordering(1016) 00:14:09.658 fused_ordering(1017) 00:14:09.658 fused_ordering(1018) 00:14:09.658 fused_ordering(1019) 00:14:09.658 fused_ordering(1020) 00:14:09.658 fused_ordering(1021) 00:14:09.658 fused_ordering(1022) 00:14:09.658 fused_ordering(1023) 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:09.658 rmmod nvme_tcp 00:14:09.658 rmmod nvme_fabrics 00:14:09.658 rmmod nvme_keyring 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:09.658 06:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1031101 ']' 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1031101 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1031101 ']' 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1031101 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1031101 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1031101' 00:14:09.658 killing process with pid 1031101 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1031101 00:14:09.658 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1031101 00:14:09.939 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:09.939 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:09.939 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:09.939 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:09.939 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:09.939 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:09.939 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:09.939 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:09.939 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:09.939 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.939 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.939 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:12.494 00:14:12.494 real 0m7.543s 00:14:12.494 user 0m4.757s 00:14:12.494 sys 0m3.514s 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:12.494 ************************************ 00:14:12.494 END TEST nvmf_fused_ordering 00:14:12.494 
************************************ 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:12.494 ************************************ 00:14:12.494 START TEST nvmf_ns_masking 00:14:12.494 ************************************ 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:12.494 * Looking for test storage... 00:14:12.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:12.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.494 --rc genhtml_branch_coverage=1 00:14:12.494 --rc genhtml_function_coverage=1 00:14:12.494 --rc genhtml_legend=1 00:14:12.494 --rc geninfo_all_blocks=1 00:14:12.494 --rc geninfo_unexecuted_blocks=1 00:14:12.494 00:14:12.494 ' 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:12.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.494 --rc genhtml_branch_coverage=1 00:14:12.494 --rc genhtml_function_coverage=1 00:14:12.494 --rc genhtml_legend=1 00:14:12.494 --rc geninfo_all_blocks=1 00:14:12.494 --rc geninfo_unexecuted_blocks=1 00:14:12.494 00:14:12.494 ' 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:12.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.494 --rc genhtml_branch_coverage=1 00:14:12.494 --rc genhtml_function_coverage=1 00:14:12.494 --rc genhtml_legend=1 00:14:12.494 --rc geninfo_all_blocks=1 00:14:12.494 --rc geninfo_unexecuted_blocks=1 00:14:12.494 00:14:12.494 ' 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:12.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.494 --rc genhtml_branch_coverage=1 00:14:12.494 --rc genhtml_function_coverage=1 00:14:12.494 --rc genhtml_legend=1 00:14:12.494 --rc geninfo_all_blocks=1 00:14:12.494 --rc geninfo_unexecuted_blocks=1 00:14:12.494 00:14:12.494 ' 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.494 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:12.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d0d4f282-c565-40b2-af4b-706a9600d800 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=5268024f-86f0-4fe3-bd25-4afa2ea28a12 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7670547a-e622-4133-b036-ca2bb6fdfa83 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:12.495 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:14.402 06:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:14.402 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:14.402 06:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:14.402 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:14.402 Found net devices under 0000:84:00.0: cvl_0_0 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
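
The discovery blocks above resolve each detected PCI function (0000:84:00.0 and 0000:84:00.1, both Intel 0x159b handled by ice) to its kernel net device through the sysfs glob the trace itself shows, /sys/bus/pci/devices/$pci/net/*. A minimal standalone sketch of that lookup, assuming only the standard sysfs layout (the address and the "up" gate are taken from the trace; reading operstate is an assumption about what the [[ up == up ]] test compares):

    pci=0000:84:00.0
    # Each network-capable PCI function lists its interfaces under .../net/.
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue
        name=${dev##*/}                   # strip the sysfs path, keep the ifname
        state=$(cat "$dev/operstate")     # assumed source of the "up" check above
        echo "Found net device under $pci: $name ($state)"
    done

This is how the run arrives at cvl_0_0 and cvl_0_1 before the TCP init code moves one of them into a network namespace.
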
00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:14.402 Found net devices under 0000:84:00.1: cvl_0_1 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.402 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.403 06:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:14.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:14:14.403 00:14:14.403 --- 10.0.0.2 ping statistics --- 00:14:14.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.403 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:14:14.403 00:14:14.403 --- 10.0.0.1 ping statistics --- 00:14:14.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.403 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:14.403 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:14.660 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:14.660 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.660 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.660 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:14.660 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1033506 00:14:14.660 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:14.660 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1033506 00:14:14.660 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1033506 ']' 00:14:14.660 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.660 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.660 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.660 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.660 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:14.660 [2024-12-08 06:18:04.595975] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:14:14.660 [2024-12-08 06:18:04.596069] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.660 [2024-12-08 06:18:04.667179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.660 [2024-12-08 06:18:04.722155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.660 [2024-12-08 06:18:04.722221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.660 [2024-12-08 06:18:04.722259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.660 [2024-12-08 06:18:04.722270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.660 [2024-12-08 06:18:04.722280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
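
At this point nvmf_tgt is starting inside cvl_0_0_ns_spdk and waitforlisten is polling /var/tmp/spdk.sock; once the reactor reports in just below, everything else is driven over rpc.py. Collected in one place, the target-side bring-up that the following traces execute piecemeal is (commands and arguments exactly as they appear in this log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1    # 64 MB bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

For the masking cases further down, the same add_ns call gains --no-auto-visible, after which visibility is granted per host with nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 and revoked with nvmf_ns_remove_host.
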
00:14:14.660 [2024-12-08 06:18:04.722942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.916 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.917 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:14.917 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.917 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:14.917 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:14.917 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.917 06:18:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:15.173 [2024-12-08 06:18:05.131327] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.173 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:15.173 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:15.173 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:15.429 Malloc1 00:14:15.429 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:15.699 Malloc2 00:14:15.699 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:15.957 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:16.523 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.523 [2024-12-08 06:18:06.624074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.783 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:16.783 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7670547a-e622-4133-b036-ca2bb6fdfa83 -a 10.0.0.2 -s 4420 -i 4 00:14:16.783 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.783 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:16.783 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.783 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:16.783 
06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:19.313 [ 0]:0x1 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5e1d4a2bac594b7ab200e6db911554ae 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5e1d4a2bac594b7ab200e6db911554ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.313 06:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:19.313 [ 0]:0x1 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5e1d4a2bac594b7ab200e6db911554ae 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5e1d4a2bac594b7ab200e6db911554ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.313 06:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:19.313 [ 1]:0x2 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29f84befb1f04fc7b62e07282ba6bb3c 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29f84befb1f04fc7b62e07282ba6bb3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.313 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:19.314 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.314 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.571 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:19.830 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:19.830 06:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7670547a-e622-4133-b036-ca2bb6fdfa83 -a 10.0.0.2 -s 4420 -i 4 00:14:20.090 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:20.090 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:20.090 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.090 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:20.090 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:20.090 06:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:21.998 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:21.998 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:21.998 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.998 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:21.998 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.998 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:14:21.998 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:21.998 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.257 [ 0]:0x2 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=29f84befb1f04fc7b62e07282ba6bb3c 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29f84befb1f04fc7b62e07282ba6bb3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.257 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:22.823 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:22.823 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.823 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.823 [ 0]:0x1 00:14:22.823 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.823 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.823 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5e1d4a2bac594b7ab200e6db911554ae 00:14:22.823 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5e1d4a2bac594b7ab200e6db911554ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.823 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:22.823 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.823 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.823 [ 1]:0x2 00:14:22.823 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.823 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.823 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29f84befb1f04fc7b62e07282ba6bb3c 00:14:22.824 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29f84befb1f04fc7b62e07282ba6bb3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.824 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.081 06:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:23.081 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:23.082 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:23.082 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:23.082 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:23.082 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.082 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:23.082 [ 0]:0x2 00:14:23.082 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:23.082 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.082 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29f84befb1f04fc7b62e07282ba6bb3c 00:14:23.082 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29f84befb1f04fc7b62e07282ba6bb3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.082 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:23.082 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.340 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.600 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:23.600 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7670547a-e622-4133-b036-ca2bb6fdfa83 -a 10.0.0.2 -s 4420 -i 4 00:14:23.600 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:23.600 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:23.600 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.600 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:23.600 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:23.600 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:25.503 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:25.503 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:25.503 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.503 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:25.503 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.503 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:25.503 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:25.503 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:25.760 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:25.760 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:25.760 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:25.760 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.760 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.760 [ 0]:0x1 00:14:25.760 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.760 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.760 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5e1d4a2bac594b7ab200e6db911554ae 00:14:25.760 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5e1d4a2bac594b7ab200e6db911554ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.760 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:25.760 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.760 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.017 [ 1]:0x2 00:14:26.017 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.017 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.017 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29f84befb1f04fc7b62e07282ba6bb3c 00:14:26.017 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29f84befb1f04fc7b62e07282ba6bb3c != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.017 06:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.277 [ 0]:0x2 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29f84befb1f04fc7b62e07282ba6bb3c 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29f84befb1f04fc7b62e07282ba6bb3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.277 06:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:26.277 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.536 [2024-12-08 06:18:16.554402] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:26.536 request: 00:14:26.536 { 00:14:26.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.536 "nsid": 2, 00:14:26.536 "host": "nqn.2016-06.io.spdk:host1", 00:14:26.536 "method": "nvmf_ns_remove_host", 00:14:26.536 "req_id": 1 00:14:26.536 } 00:14:26.536 Got JSON-RPC error response 00:14:26.536 response: 00:14:26.536 { 00:14:26.536 "code": -32602, 00:14:26.536 "message": "Invalid parameters" 00:14:26.536 } 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:26.536 06:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.536 [ 0]:0x2 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.536 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=29f84befb1f04fc7b62e07282ba6bb3c 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 29f84befb1f04fc7b62e07282ba6bb3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1035650 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1035650 /var/tmp/host.sock 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1035650 ']' 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:26.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.793 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.793 [2024-12-08 06:18:16.756031] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:14:26.793 [2024-12-08 06:18:16.756112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035650 ] 00:14:26.793 [2024-12-08 06:18:16.822165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.793 [2024-12-08 06:18:16.878650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.050 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.050 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:27.050 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.307 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.565 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d0d4f282-c565-40b2-af4b-706a9600d800 00:14:27.565 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:27.826 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D0D4F282C56540B2AF4B706A9600D800 -i 00:14:28.156 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 5268024f-86f0-4fe3-bd25-4afa2ea28a12 00:14:28.156 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:28.156 06:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5268024F86F04FE3BD254AFA2EA28A12 -i 00:14:28.156 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.413 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:28.669 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:28.669 06:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:29.235 nvme0n1 00:14:29.235 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:29.235 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:29.492 nvme1n2 00:14:29.492 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:29.492 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:29.492 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:29.492 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:29.492 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:29.749 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:29.749 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:29.749 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:29.749 06:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:30.006 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d0d4f282-c565-40b2-af4b-706a9600d800 == \d\0\d\4\f\2\8\2\-\c\5\6\5\-\4\0\b\2\-\a\f\4\b\-\7\0\6\a\9\6\0\0\d\8\0\0 ]] 00:14:30.006 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:30.006 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:30.006 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:30.574 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
5268024f-86f0-4fe3-bd25-4afa2ea28a12 == \5\2\6\8\0\2\4\f\-\8\6\f\0\-\4\f\e\3\-\b\d\2\5\-\4\a\f\a\2\e\a\2\8\a\1\2 ]] 00:14:30.574 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.574 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid d0d4f282-c565-40b2-af4b-706a9600d800 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D0D4F282C56540B2AF4B706A9600D800 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D0D4F282C56540B2AF4B706A9600D800 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:31.138 06:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D0D4F282C56540B2AF4B706A9600D800 00:14:31.138 [2024-12-08 06:18:21.231752] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:31.138 [2024-12-08 06:18:21.231796] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:31.138 [2024-12-08 06:18:21.231822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.138 request: 00:14:31.138 { 00:14:31.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.138 "namespace": { 00:14:31.138 "bdev_name": 
"invalid", 00:14:31.138 "nsid": 1, 00:14:31.138 "nguid": "D0D4F282C56540B2AF4B706A9600D800", 00:14:31.138 "no_auto_visible": false, 00:14:31.138 "hide_metadata": false 00:14:31.138 }, 00:14:31.138 "method": "nvmf_subsystem_add_ns", 00:14:31.138 "req_id": 1 00:14:31.138 } 00:14:31.138 Got JSON-RPC error response 00:14:31.138 response: 00:14:31.138 { 00:14:31.138 "code": -32602, 00:14:31.139 "message": "Invalid parameters" 00:14:31.139 } 00:14:31.139 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:31.139 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.139 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.139 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.139 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid d0d4f282-c565-40b2-af4b-706a9600d800 00:14:31.139 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:31.414 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D0D4F282C56540B2AF4B706A9600D800 -i 00:14:31.673 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:33.581 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:33.581 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:33.581 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:33.840 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:33.840 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1035650 00:14:33.840 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1035650 ']' 00:14:33.840 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1035650 00:14:33.840 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:33.840 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.840 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1035650 00:14:33.840 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:33.840 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:33.840 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1035650' 00:14:33.840 killing process with pid 1035650 00:14:33.840 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1035650 00:14:33.840 06:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1035650 00:14:34.408 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.668 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:34.668 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:34.668 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:34.668 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:34.668 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:34.668 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:34.668 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.668 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:34.668 rmmod nvme_tcp 00:14:34.668 rmmod nvme_fabrics 00:14:34.668 rmmod nvme_keyring 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1033506 ']' 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1033506 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1033506 ']' 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1033506 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1033506 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1033506' 00:14:34.669 killing process with pid 1033506 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1033506 00:14:34.669 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1033506 00:14:34.930 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.930 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.930 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.930 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:34.930 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:34.930 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:14:34.930 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:34.930 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.930 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:34.930 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.930 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.930 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:37.484 00:14:37.484 real 0m24.947s 00:14:37.484 user 0m36.165s 00:14:37.484 sys 0m4.737s 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:37.484 ************************************ 00:14:37.484 END TEST nvmf_ns_masking 00:14:37.484 ************************************ 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.484 ************************************ 00:14:37.484 START TEST nvmf_nvme_cli 00:14:37.484 ************************************ 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:37.484 * Looking for test storage... 
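The nvmf_ns_masking run that just finished drives every visibility assertion through the ns_is_visible helper seen in the trace (nvme list-ns, then nvme id-ns piped to jq). A minimal sketch of that pattern, assuming a connected controller at /dev/nvme0 with nvme-cli and jq available on the host (an illustrative reconstruction, not the verbatim target/ns_masking.sh helper):

# A namespace counts as visible when it appears in list-ns and reports a
# non-zero NGUID; a masked namespace is absent or carries an all-zero NGUID.
ns_is_visible() {
    local nsid=$1
    nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

# The companion uuid2nguid step in the trace strips the dashes from a UUID
# (tr -d -); the upper-casing is inferred from the NGUIDs seen above.
uuid2nguid() { tr -d - <<< "${1^^}"; }
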
00:14:37.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:37.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.484 --rc genhtml_branch_coverage=1 00:14:37.484 --rc genhtml_function_coverage=1 00:14:37.484 --rc genhtml_legend=1 00:14:37.484 --rc geninfo_all_blocks=1 00:14:37.484 --rc geninfo_unexecuted_blocks=1 00:14:37.484 00:14:37.484 ' 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:37.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.484 --rc genhtml_branch_coverage=1 00:14:37.484 --rc genhtml_function_coverage=1 00:14:37.484 --rc genhtml_legend=1 00:14:37.484 --rc geninfo_all_blocks=1 00:14:37.484 --rc geninfo_unexecuted_blocks=1 00:14:37.484 00:14:37.484 ' 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:37.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.484 --rc genhtml_branch_coverage=1 00:14:37.484 --rc genhtml_function_coverage=1 00:14:37.484 --rc genhtml_legend=1 00:14:37.484 --rc geninfo_all_blocks=1 00:14:37.484 --rc geninfo_unexecuted_blocks=1 00:14:37.484 00:14:37.484 ' 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:37.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.484 --rc genhtml_branch_coverage=1 00:14:37.484 --rc genhtml_function_coverage=1 00:14:37.484 --rc genhtml_legend=1 00:14:37.484 --rc geninfo_all_blocks=1 00:14:37.484 --rc geninfo_unexecuted_blocks=1 00:14:37.484 00:14:37.484 ' 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
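The lcov gate traced above runs the version comparison from scripts/common.sh (lt 1.15 2 via cmp_versions). A condensed sketch of that logic, paraphrased rather than copied (the real helper also splits on '-' and ':'):

# Succeed when version $1 sorts strictly before version $2, comparing
# dot-separated fields numerically and treating missing fields as 0.
lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not strictly less
}
# e.g. lt 1.15 2 succeeds, which is the path the trace takes before
# selecting the '--rc lcov_branch_coverage=1' option spelling.
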
00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.484 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:37.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:37.485 06:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:37.485 06:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:39.390 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:39.390 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.390 
06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:39.390 Found net devices under 0000:84:00.0: cvl_0_0 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:39.390 Found net devices under 0000:84:00.1: cvl_0_1 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:39.390 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:39.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:14:39.391 00:14:39.391 --- 10.0.0.2 ping statistics --- 00:14:39.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.391 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:39.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:39.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:14:39.391 00:14:39.391 --- 10.0.0.1 ping statistics --- 00:14:39.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.391 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1038574 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1038574 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1038574 ']' 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.391 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.650 [2024-12-08 06:18:29.538434] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:14:39.650 [2024-12-08 06:18:29.538529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.650 [2024-12-08 06:18:29.612477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:39.650 [2024-12-08 06:18:29.671291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.650 [2024-12-08 06:18:29.671359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.650 [2024-12-08 06:18:29.671372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.650 [2024-12-08 06:18:29.671384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.650 [2024-12-08 06:18:29.671393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.650 [2024-12-08 06:18:29.673113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.650 [2024-12-08 06:18:29.673179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.650 [2024-12-08 06:18:29.673244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.650 [2024-12-08 06:18:29.673247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.911 [2024-12-08 06:18:29.813960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.911 Malloc0 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.911 Malloc1 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.911 [2024-12-08 06:18:29.917045] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.911 06:18:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:14:40.171 00:14:40.171 Discovery Log Number of Records 2, Generation counter 2 00:14:40.171 =====Discovery Log Entry 0====== 00:14:40.171 trtype: tcp 00:14:40.171 adrfam: ipv4 00:14:40.171 subtype: current discovery subsystem 00:14:40.171 treq: not required 00:14:40.171 portid: 0 00:14:40.171 trsvcid: 4420 00:14:40.171 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:40.171 traddr: 10.0.0.2 00:14:40.171 eflags: explicit discovery connections, duplicate discovery information 00:14:40.171 sectype: none 00:14:40.171 =====Discovery Log Entry 1====== 00:14:40.171 trtype: tcp 00:14:40.171 adrfam: ipv4 00:14:40.171 subtype: nvme subsystem 00:14:40.171 treq: not required 00:14:40.171 portid: 0 00:14:40.171 trsvcid: 4420 00:14:40.171 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:40.171 traddr: 10.0.0.2 00:14:40.171 eflags: none 00:14:40.171 sectype: none 00:14:40.171 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:40.171 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:40.171 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:40.171 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.171 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:40.171 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:40.171 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.171 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:40.171 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.171 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:40.171 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:40.742 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:40.742 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:40.742 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.742 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:40.742 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:40.742 06:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:42.647 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:42.647 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:42.647 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.647 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:42.647 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.647 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:42.647 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:42.647 06:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:42.647 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.647 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:42.907 /dev/nvme0n2 ]] 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.907 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:43.166 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:43.166 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.166 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:43.166 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.166 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:43.166 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:43.166 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.166 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:43.166 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:43.166 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.166 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:43.166 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.434 06:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:43.434 rmmod nvme_tcp 00:14:43.434 rmmod nvme_fabrics 00:14:43.434 rmmod nvme_keyring 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1038574 ']' 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1038574 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1038574 ']' 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1038574 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1038574 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1038574' 00:14:43.434 killing process with pid 1038574 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1038574 00:14:43.434 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1038574 00:14:43.739 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:43.739 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:43.739 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:43.739 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:43.739 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:43.739 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:43.739 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:43.739 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:43.739 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:43.739 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.739 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.739 06:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.678 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:45.678 00:14:45.678 real 0m8.690s 00:14:45.678 user 0m16.636s 00:14:45.678 sys 0m2.329s 00:14:45.678 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.678 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.678 ************************************ 00:14:45.678 END TEST nvmf_nvme_cli 00:14:45.678 ************************************ 00:14:45.678 06:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:45.678 06:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:45.678 06:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:45.678 06:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.678 06:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:45.938 ************************************ 00:14:45.938 START TEST nvmf_vfio_user 00:14:45.938 ************************************ 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:45.938 * Looking for test storage... 00:14:45.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:45.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.938 --rc genhtml_branch_coverage=1 00:14:45.938 --rc genhtml_function_coverage=1 00:14:45.938 --rc genhtml_legend=1 00:14:45.938 --rc geninfo_all_blocks=1 00:14:45.938 --rc geninfo_unexecuted_blocks=1 00:14:45.938 00:14:45.938 ' 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:45.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.938 --rc genhtml_branch_coverage=1 00:14:45.938 --rc genhtml_function_coverage=1 00:14:45.938 --rc genhtml_legend=1 00:14:45.938 --rc geninfo_all_blocks=1 00:14:45.938 --rc geninfo_unexecuted_blocks=1 00:14:45.938 00:14:45.938 ' 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:45.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.938 --rc genhtml_branch_coverage=1 00:14:45.938 --rc genhtml_function_coverage=1 00:14:45.938 --rc genhtml_legend=1 00:14:45.938 --rc geninfo_all_blocks=1 00:14:45.938 --rc geninfo_unexecuted_blocks=1 00:14:45.938 00:14:45.938 ' 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:45.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.938 --rc genhtml_branch_coverage=1 00:14:45.938 --rc genhtml_function_coverage=1 00:14:45.938 --rc genhtml_legend=1 00:14:45.938 --rc geninfo_all_blocks=1 00:14:45.938 --rc geninfo_unexecuted_blocks=1 00:14:45.938 00:14:45.938 ' 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:45.938 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:45.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
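The vfio-user variant set up next follows the same malloc-bdev pattern, but the listener address is a filesystem directory rather than an IP and port: the target creates a vfio-user control socket (cntrl) under it, from which a client maps the emulated NVMe PCI device. A condensed sketch of the bring-up traced below, with paths shortened as above and the per-device loop unrolled for device 1 (device 2 repeats it with Malloc2/cnode2):

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &    # target on cores 0-3
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1        # becomes the listener address
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0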
00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1039515 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1039515' 00:14:45.939 Process pid: 1039515 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1039515 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1039515 ']' 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.939 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:45.939 [2024-12-08 06:18:36.030272] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:14:45.939 [2024-12-08 06:18:36.030364] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.199 [2024-12-08 06:18:36.101609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.199 [2024-12-08 06:18:36.160352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.199 [2024-12-08 06:18:36.160422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:46.199 [2024-12-08 06:18:36.160435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.199 [2024-12-08 06:18:36.160446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.199 [2024-12-08 06:18:36.160456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.199 [2024-12-08 06:18:36.162260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.199 [2024-12-08 06:18:36.162321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.200 [2024-12-08 06:18:36.162387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.200 [2024-12-08 06:18:36.162390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.200 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.200 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:46.200 06:18:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:47.574 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:47.574 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:47.574 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:47.574 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:47.574 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:47.574 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:47.832 Malloc1 00:14:47.832 06:18:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:48.089 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:48.655 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:48.655 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:48.655 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:48.655 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:49.222 Malloc2 00:14:49.222 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
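On the host side, SPDK tools address a vfio-user controller with a transport ID string whose traddr is that listener directory; the identify run traced below uses exactly this form (flags copied from the trace; the -L options enable the nvme/nvme_vfio/vfio_pci debug logs that produce the register-level output that follows):

  build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci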
00:14:49.480 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:49.738 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:49.998 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:49.998 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:49.998 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.998 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:49.998 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:49.998 06:18:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:49.998 [2024-12-08 06:18:39.985533] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:14:49.998 [2024-12-08 06:18:39.985575] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039945 ] 00:14:49.998 [2024-12-08 06:18:40.039803] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:49.998 [2024-12-08 06:18:40.050306] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:49.998 [2024-12-08 06:18:40.050373] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc746947000 00:14:49.998 [2024-12-08 06:18:40.051289] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.998 [2024-12-08 06:18:40.052278] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.998 [2024-12-08 06:18:40.053275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.998 [2024-12-08 06:18:40.054282] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:49.998 [2024-12-08 06:18:40.055284] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:49.998 [2024-12-08 06:18:40.056291] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.998 [2024-12-08 06:18:40.057292] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:49.998 [2024-12-08 06:18:40.058321] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.998 [2024-12-08 06:18:40.059329] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:49.998 [2024-12-08 06:18:40.059362] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc74693c000 00:14:49.998 [2024-12-08 06:18:40.060545] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:49.998 [2024-12-08 06:18:40.074598] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:49.998 [2024-12-08 06:18:40.074642] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:49.998 [2024-12-08 06:18:40.083468] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:49.998 [2024-12-08 06:18:40.083531] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:49.998 [2024-12-08 06:18:40.083661] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:49.998 [2024-12-08 06:18:40.083692] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:49.998 [2024-12-08 06:18:40.083719] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:49.998 [2024-12-08 06:18:40.084457] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:49.998 [2024-12-08 06:18:40.084483] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:49.998 [2024-12-08 06:18:40.084498] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:49.998 [2024-12-08 06:18:40.085459] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:49.998 [2024-12-08 06:18:40.085492] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:49.998 [2024-12-08 06:18:40.085508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:49.998 [2024-12-08 06:18:40.086467] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:49.998 [2024-12-08 06:18:40.086486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:49.998 [2024-12-08 06:18:40.087475] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
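The register offsets in this init trace follow the standard NVMe BAR0 layout, and the sequence is the canonical enable handshake: confirm CC.EN = 0 and CSTS.RDY = 0, program the admin queue registers, set CC.EN = 1, then poll until CSTS.RDY = 1. Annotated for readability, with the values copied from the surrounding entries:

  # offset  register  value(s) seen       meaning
  # 0x00    CAP       0x201e0100ff        controller capabilities
  # 0x08    VS        0x10300             NVMe spec version 1.3.0
  # 0x14    CC        0x0 -> 0x460001     configuration: EN plus SQ/CQ entry sizes (64 B / 16 B)
  # 0x1c    CSTS      0x0 -> 0x1          status: RDY flips once the controller is enabled
  # 0x24    AQA       0xff00ff            admin SQ/CQ sizes (256 entries each, zero-based)
  # 0x28    ASQ       0x2000003c0000      admin submission queue base address
  # 0x30    ACQ       0x2000003be000      admin completion queue base address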
00:14:49.998 [2024-12-08 06:18:40.087495] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:49.999 [2024-12-08 06:18:40.087504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:49.999 [2024-12-08 06:18:40.087515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:49.999 [2024-12-08 06:18:40.087627] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:49.999 [2024-12-08 06:18:40.087635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:49.999 [2024-12-08 06:18:40.087644] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:49.999 [2024-12-08 06:18:40.088487] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:49.999 [2024-12-08 06:18:40.089482] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:49.999 [2024-12-08 06:18:40.090492] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:49.999 [2024-12-08 06:18:40.091489] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:49.999 [2024-12-08 06:18:40.091611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:49.999 [2024-12-08 06:18:40.092504] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:49.999 [2024-12-08 06:18:40.092523] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:49.999 [2024-12-08 06:18:40.092532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.092556] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:49.999 [2024-12-08 06:18:40.092575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.092613] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.999 [2024-12-08 06:18:40.092622] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.999 [2024-12-08 06:18:40.092636] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.999 [2024-12-08 06:18:40.092656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:49.999 [2024-12-08 06:18:40.092742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:49.999 [2024-12-08 06:18:40.092776] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:49.999 [2024-12-08 06:18:40.092786] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:49.999 [2024-12-08 06:18:40.092794] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:49.999 [2024-12-08 06:18:40.092806] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:49.999 [2024-12-08 06:18:40.092814] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:49.999 [2024-12-08 06:18:40.092822] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:49.999 [2024-12-08 06:18:40.092830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.092843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.092859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:49.999 [2024-12-08 06:18:40.092877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:49.999 [2024-12-08 06:18:40.092895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.999 [2024-12-08 06:18:40.092908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.999 [2024-12-08 06:18:40.092920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.999 [2024-12-08 06:18:40.092932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.999 [2024-12-08 06:18:40.092941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.092957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.092972] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:49.999 [2024-12-08 06:18:40.092984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:49.999 [2024-12-08 06:18:40.092995] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:49.999 
[2024-12-08 06:18:40.093004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093055] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:49.999 [2024-12-08 06:18:40.093071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:49.999 [2024-12-08 06:18:40.093142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093173] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:49.999 [2024-12-08 06:18:40.093194] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:49.999 [2024-12-08 06:18:40.093201] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.999 [2024-12-08 06:18:40.093210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:49.999 [2024-12-08 06:18:40.093226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:49.999 [2024-12-08 06:18:40.093246] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:49.999 [2024-12-08 06:18:40.093265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093293] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.999 [2024-12-08 06:18:40.093301] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.999 [2024-12-08 06:18:40.093307] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.999 [2024-12-08 06:18:40.093316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.999 [2024-12-08 06:18:40.093356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:49.999 [2024-12-08 06:18:40.093380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093422] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.999 [2024-12-08 06:18:40.093430] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.999 [2024-12-08 06:18:40.093436] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.999 [2024-12-08 06:18:40.093445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.999 [2024-12-08 06:18:40.093459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:49.999 [2024-12-08 06:18:40.093473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093520] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093537] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:49.999 [2024-12-08 06:18:40.093547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:49.999 [2024-12-08 06:18:40.093556] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:49.999 [2024-12-08 06:18:40.093583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:49.999 [2024-12-08 06:18:40.093601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:49.999 [2024-12-08 06:18:40.093620] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:49.999 [2024-12-08 06:18:40.093632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:50.000 [2024-12-08 06:18:40.093648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:50.000 [2024-12-08 06:18:40.093660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:50.000 [2024-12-08 06:18:40.093675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:50.000 [2024-12-08 06:18:40.093690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:50.000 [2024-12-08 06:18:40.093748] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:50.000 [2024-12-08 06:18:40.093761] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:50.000 [2024-12-08 06:18:40.093768] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:50.000 [2024-12-08 06:18:40.093775] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:50.000 [2024-12-08 06:18:40.093781] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:50.000 [2024-12-08 06:18:40.093790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:50.000 [2024-12-08 06:18:40.093803] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:50.000 [2024-12-08 06:18:40.093811] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:50.000 [2024-12-08 06:18:40.093817] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.000 [2024-12-08 06:18:40.093826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:50.000 [2024-12-08 06:18:40.093838] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:50.000 [2024-12-08 06:18:40.093846] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.000 [2024-12-08 06:18:40.093852] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.000 [2024-12-08 06:18:40.093860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.000 [2024-12-08 06:18:40.093873] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:50.000 [2024-12-08 06:18:40.093881] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:50.000 [2024-12-08 06:18:40.093887] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.000 [2024-12-08 06:18:40.093896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:50.000 [2024-12-08 06:18:40.093915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:50.000 [2024-12-08 06:18:40.093937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:50.000 [2024-12-08 06:18:40.093956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:50.000 [2024-12-08 06:18:40.093969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:50.000 ===================================================== 00:14:50.000 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:50.000 ===================================================== 00:14:50.000 Controller Capabilities/Features 00:14:50.000 ================================ 00:14:50.000 Vendor ID: 4e58 00:14:50.000 Subsystem Vendor ID: 4e58 00:14:50.000 Serial Number: SPDK1 00:14:50.000 Model Number: SPDK bdev Controller 00:14:50.000 Firmware Version: 25.01 00:14:50.000 Recommended Arb Burst: 6 00:14:50.000 IEEE OUI Identifier: 8d 6b 50 00:14:50.000 Multi-path I/O 00:14:50.000 May have multiple subsystem ports: Yes 00:14:50.000 May have multiple controllers: Yes 00:14:50.000 Associated with SR-IOV VF: No 00:14:50.000 Max Data Transfer Size: 131072 00:14:50.000 Max Number of Namespaces: 32 00:14:50.000 Max Number of I/O Queues: 127 00:14:50.000 NVMe Specification Version (VS): 1.3 00:14:50.000 NVMe Specification Version (Identify): 1.3 00:14:50.000 Maximum Queue Entries: 256 00:14:50.000 Contiguous Queues Required: Yes 00:14:50.000 Arbitration Mechanisms Supported 00:14:50.000 Weighted Round Robin: Not Supported 00:14:50.000 Vendor Specific: Not Supported 00:14:50.000 Reset Timeout: 15000 ms 00:14:50.000 Doorbell Stride: 4 bytes 00:14:50.000 NVM Subsystem Reset: Not Supported 00:14:50.000 Command Sets Supported 00:14:50.000 NVM Command Set: Supported 00:14:50.000 Boot Partition: Not Supported 00:14:50.000 Memory Page Size Minimum: 4096 bytes 00:14:50.000 Memory Page Size Maximum: 4096 bytes 00:14:50.000 Persistent Memory Region: Not Supported 00:14:50.000 Optional Asynchronous Events Supported 00:14:50.000 Namespace Attribute Notices: Supported 00:14:50.000 Firmware Activation Notices: Not Supported 00:14:50.000 ANA Change Notices: Not Supported 00:14:50.000 PLE Aggregate Log Change Notices: Not Supported 00:14:50.000 LBA Status Info Alert Notices: Not Supported 00:14:50.000 EGE Aggregate Log Change Notices: Not Supported 00:14:50.000 Normal NVM Subsystem Shutdown event: Not Supported 00:14:50.000 Zone Descriptor Change Notices: Not Supported 00:14:50.000 Discovery Log Change Notices: Not Supported 00:14:50.000 Controller Attributes 00:14:50.000 128-bit Host Identifier: Supported 00:14:50.000 Non-Operational Permissive Mode: Not Supported 00:14:50.000 NVM Sets: Not Supported 00:14:50.000 Read Recovery Levels: Not Supported 00:14:50.000 Endurance Groups: Not Supported 00:14:50.000 Predictable Latency Mode: Not Supported 00:14:50.000 Traffic Based Keep ALive: Not Supported 00:14:50.000 Namespace Granularity: Not Supported 00:14:50.000 SQ Associations: Not Supported 00:14:50.000 UUID List: Not Supported 00:14:50.000 Multi-Domain Subsystem: Not Supported 00:14:50.000 Fixed Capacity Management: Not Supported 00:14:50.000 Variable Capacity Management: Not Supported 00:14:50.000 Delete Endurance Group: Not Supported 00:14:50.000 Delete NVM Set: Not Supported 00:14:50.000 Extended LBA Formats Supported: Not Supported 00:14:50.000 Flexible Data Placement Supported: Not Supported 00:14:50.000 00:14:50.000 Controller Memory Buffer Support 00:14:50.000 ================================ 00:14:50.000 
Supported: No 00:14:50.000 00:14:50.000 Persistent Memory Region Support 00:14:50.000 ================================ 00:14:50.000 Supported: No 00:14:50.000 00:14:50.000 Admin Command Set Attributes 00:14:50.000 ============================ 00:14:50.000 Security Send/Receive: Not Supported 00:14:50.000 Format NVM: Not Supported 00:14:50.000 Firmware Activate/Download: Not Supported 00:14:50.000 Namespace Management: Not Supported 00:14:50.000 Device Self-Test: Not Supported 00:14:50.000 Directives: Not Supported 00:14:50.000 NVMe-MI: Not Supported 00:14:50.000 Virtualization Management: Not Supported 00:14:50.000 Doorbell Buffer Config: Not Supported 00:14:50.000 Get LBA Status Capability: Not Supported 00:14:50.000 Command & Feature Lockdown Capability: Not Supported 00:14:50.000 Abort Command Limit: 4 00:14:50.000 Async Event Request Limit: 4 00:14:50.000 Number of Firmware Slots: N/A 00:14:50.000 Firmware Slot 1 Read-Only: N/A 00:14:50.000 Firmware Activation Without Reset: N/A 00:14:50.000 Multiple Update Detection Support: N/A 00:14:50.000 Firmware Update Granularity: No Information Provided 00:14:50.000 Per-Namespace SMART Log: No 00:14:50.000 Asymmetric Namespace Access Log Page: Not Supported 00:14:50.000 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:50.000 Command Effects Log Page: Supported 00:14:50.000 Get Log Page Extended Data: Supported 00:14:50.000 Telemetry Log Pages: Not Supported 00:14:50.000 Persistent Event Log Pages: Not Supported 00:14:50.000 Supported Log Pages Log Page: May Support 00:14:50.000 Commands Supported & Effects Log Page: Not Supported 00:14:50.000 Feature Identifiers & Effects Log Page:May Support 00:14:50.000 NVMe-MI Commands & Effects Log Page: May Support 00:14:50.000 Data Area 4 for Telemetry Log: Not Supported 00:14:50.000 Error Log Page Entries Supported: 128 00:14:50.000 Keep Alive: Supported 00:14:50.000 Keep Alive Granularity: 10000 ms 00:14:50.000 00:14:50.000 NVM Command Set Attributes 00:14:50.000 ========================== 00:14:50.000 Submission Queue Entry Size 00:14:50.000 Max: 64 00:14:50.000 Min: 64 00:14:50.000 Completion Queue Entry Size 00:14:50.000 Max: 16 00:14:50.000 Min: 16 00:14:50.000 Number of Namespaces: 32 00:14:50.000 Compare Command: Supported 00:14:50.000 Write Uncorrectable Command: Not Supported 00:14:50.000 Dataset Management Command: Supported 00:14:50.000 Write Zeroes Command: Supported 00:14:50.000 Set Features Save Field: Not Supported 00:14:50.000 Reservations: Not Supported 00:14:50.000 Timestamp: Not Supported 00:14:50.000 Copy: Supported 00:14:50.000 Volatile Write Cache: Present 00:14:50.000 Atomic Write Unit (Normal): 1 00:14:50.000 Atomic Write Unit (PFail): 1 00:14:50.000 Atomic Compare & Write Unit: 1 00:14:50.000 Fused Compare & Write: Supported 00:14:50.000 Scatter-Gather List 00:14:50.000 SGL Command Set: Supported (Dword aligned) 00:14:50.000 SGL Keyed: Not Supported 00:14:50.000 SGL Bit Bucket Descriptor: Not Supported 00:14:50.000 SGL Metadata Pointer: Not Supported 00:14:50.000 Oversized SGL: Not Supported 00:14:50.000 SGL Metadata Address: Not Supported 00:14:50.000 SGL Offset: Not Supported 00:14:50.001 Transport SGL Data Block: Not Supported 00:14:50.001 Replay Protected Memory Block: Not Supported 00:14:50.001 00:14:50.001 Firmware Slot Information 00:14:50.001 ========================= 00:14:50.001 Active slot: 1 00:14:50.001 Slot 1 Firmware Revision: 25.01 00:14:50.001 00:14:50.001 00:14:50.001 Commands Supported and Effects 00:14:50.001 ============================== 00:14:50.001 Admin 
Commands 00:14:50.001 -------------- 00:14:50.001 Get Log Page (02h): Supported 00:14:50.001 Identify (06h): Supported 00:14:50.001 Abort (08h): Supported 00:14:50.001 Set Features (09h): Supported 00:14:50.001 Get Features (0Ah): Supported 00:14:50.001 Asynchronous Event Request (0Ch): Supported 00:14:50.001 Keep Alive (18h): Supported 00:14:50.001 I/O Commands 00:14:50.001 ------------ 00:14:50.001 Flush (00h): Supported LBA-Change 00:14:50.001 Write (01h): Supported LBA-Change 00:14:50.001 Read (02h): Supported 00:14:50.001 Compare (05h): Supported 00:14:50.001 Write Zeroes (08h): Supported LBA-Change 00:14:50.001 Dataset Management (09h): Supported LBA-Change 00:14:50.001 Copy (19h): Supported LBA-Change 00:14:50.001 00:14:50.001 Error Log 00:14:50.001 ========= 00:14:50.001 00:14:50.001 Arbitration 00:14:50.001 =========== 00:14:50.001 Arbitration Burst: 1 00:14:50.001 00:14:50.001 Power Management 00:14:50.001 ================ 00:14:50.001 Number of Power States: 1 00:14:50.001 Current Power State: Power State #0 00:14:50.001 Power State #0: 00:14:50.001 Max Power: 0.00 W 00:14:50.001 Non-Operational State: Operational 00:14:50.001 Entry Latency: Not Reported 00:14:50.001 Exit Latency: Not Reported 00:14:50.001 Relative Read Throughput: 0 00:14:50.001 Relative Read Latency: 0 00:14:50.001 Relative Write Throughput: 0 00:14:50.001 Relative Write Latency: 0 00:14:50.001 Idle Power: Not Reported 00:14:50.001 Active Power: Not Reported 00:14:50.001 Non-Operational Permissive Mode: Not Supported 00:14:50.001 00:14:50.001 Health Information 00:14:50.001 ================== 00:14:50.001 Critical Warnings: 00:14:50.001 Available Spare Space: OK 00:14:50.001 Temperature: OK 00:14:50.001 Device Reliability: OK 00:14:50.001 Read Only: No 00:14:50.001 Volatile Memory Backup: OK 00:14:50.001 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:50.001 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:50.001 Available Spare: 0% 00:14:50.001 Available Sp[2024-12-08 06:18:40.094110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:50.001 [2024-12-08 06:18:40.094127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:50.001 [2024-12-08 06:18:40.094189] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:50.001 [2024-12-08 06:18:40.094208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.001 [2024-12-08 06:18:40.094219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.001 [2024-12-08 06:18:40.094229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.001 [2024-12-08 06:18:40.094238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.001 [2024-12-08 06:18:40.094528] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:50.001 [2024-12-08 06:18:40.094548] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:50.001 [2024-12-08 06:18:40.095518] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:50.001 [2024-12-08 06:18:40.095609] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:50.001 [2024-12-08 06:18:40.095623] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:50.001 [2024-12-08 06:18:40.096521] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:50.001 [2024-12-08 06:18:40.096545] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:50.001 [2024-12-08 06:18:40.096731] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:50.001 [2024-12-08 06:18:40.099750] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:50.261 are Threshold: 0% 00:14:50.261 Life Percentage Used: 0% 00:14:50.261 Data Units Read: 0 00:14:50.261 Data Units Written: 0 00:14:50.261 Host Read Commands: 0 00:14:50.261 Host Write Commands: 0 00:14:50.261 Controller Busy Time: 0 minutes 00:14:50.261 Power Cycles: 0 00:14:50.261 Power On Hours: 0 hours 00:14:50.261 Unsafe Shutdowns: 0 00:14:50.261 Unrecoverable Media Errors: 0 00:14:50.261 Lifetime Error Log Entries: 0 00:14:50.261 Warning Temperature Time: 0 minutes 00:14:50.261 Critical Temperature Time: 0 minutes 00:14:50.261 00:14:50.261 Number of Queues 00:14:50.261 ================ 00:14:50.261 Number of I/O Submission Queues: 127 00:14:50.261 Number of I/O Completion Queues: 127 00:14:50.261 00:14:50.261 Active Namespaces 00:14:50.261 ================= 00:14:50.261 Namespace ID:1 00:14:50.261 Error Recovery Timeout: Unlimited 00:14:50.261 Command Set Identifier: NVM (00h) 00:14:50.261 Deallocate: Supported 00:14:50.261 Deallocated/Unwritten Error: Not Supported 00:14:50.261 Deallocated Read Value: Unknown 00:14:50.261 Deallocate in Write Zeroes: Not Supported 00:14:50.261 Deallocated Guard Field: 0xFFFF 00:14:50.261 Flush: Supported 00:14:50.261 Reservation: Supported 00:14:50.261 Namespace Sharing Capabilities: Multiple Controllers 00:14:50.261 Size (in LBAs): 131072 (0GiB) 00:14:50.261 Capacity (in LBAs): 131072 (0GiB) 00:14:50.261 Utilization (in LBAs): 131072 (0GiB) 00:14:50.261 NGUID: 7E7F0F5EA2D747B2BCCB000DBBC7CED0 00:14:50.262 UUID: 7e7f0f5e-a2d7-47b2-bccb-000dbbc7ced0 00:14:50.262 Thin Provisioning: Not Supported 00:14:50.262 Per-NS Atomic Units: Yes 00:14:50.262 Atomic Boundary Size (Normal): 0 00:14:50.262 Atomic Boundary Size (PFail): 0 00:14:50.262 Atomic Boundary Offset: 0 00:14:50.262 Maximum Single Source Range Length: 65535 00:14:50.262 Maximum Copy Length: 65535 00:14:50.262 Maximum Source Range Count: 1 00:14:50.262 NGUID/EUI64 Never Reused: No 00:14:50.262 Namespace Write Protected: No 00:14:50.262 Number of LBA Formats: 1 00:14:50.262 Current LBA Format: LBA Format #00 00:14:50.262 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:50.262 00:14:50.262 06:18:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
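For reference, the spdk_nvme_perf flags in the @84 command above map directly onto the run reported below; this is a minimal annotated sketch of the same invocation (the binary path and transport string are taken verbatim from this log, while the glosses on -s and -g are assumptions inferred from the EAL parameters printed later in the run):

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  # -r : transport ID (vfio-user socket dir + subsystem NQN, verbatim from this run)
  # -s : memory size in MB for the app's hugepage allocation (assumption)
  # -g : single-file DPDK memory segments (assumption; cf. --single-file-segments in the EAL line further down)
  # -q : queue depth   -o : I/O size in bytes   -w : workload   -t : seconds   -c : core mask (0x2 = lcore 1)
  "$PERF" -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
          -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The @85 pass that follows repeats this invocation with -w write in place of -w read, which is why the write-side IOPS and latency table appears a few lines later.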
00:14:50.262 [2024-12-08 06:18:40.359692] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:55.531 Initializing NVMe Controllers 00:14:55.531 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:55.531 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:55.531 Initialization complete. Launching workers. 00:14:55.531 ======================================================== 00:14:55.531 Latency(us) 00:14:55.531 Device Information : IOPS MiB/s Average min max 00:14:55.531 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 31156.61 121.71 4107.43 1234.50 7544.07 00:14:55.531 ======================================================== 00:14:55.531 Total : 31156.61 121.71 4107.43 1234.50 7544.07 00:14:55.531 00:14:55.531 [2024-12-08 06:18:45.385222] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.531 06:18:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:55.531 [2024-12-08 06:18:45.636427] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.833 Initializing NVMe Controllers 00:15:00.833 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:00.833 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:00.833 Initialization complete. Launching workers. 
00:15:00.833 ======================================================== 00:15:00.833 Latency(us) 00:15:00.833 Device Information : IOPS MiB/s Average min max 00:15:00.833 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16028.80 62.61 7993.99 5670.35 11971.22 00:15:00.833 ======================================================== 00:15:00.833 Total : 16028.80 62.61 7993.99 5670.35 11971.22 00:15:00.833 00:15:00.833 [2024-12-08 06:18:50.672697] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.833 06:18:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:00.833 [2024-12-08 06:18:50.904825] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.111 [2024-12-08 06:18:55.975081] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.111 Initializing NVMe Controllers 00:15:06.111 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.111 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.111 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:06.111 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:06.111 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:06.111 Initialization complete. Launching workers. 00:15:06.111 Starting thread on core 2 00:15:06.111 Starting thread on core 3 00:15:06.111 Starting thread on core 1 00:15:06.111 06:18:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:06.369 [2024-12-08 06:18:56.305258] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.661 [2024-12-08 06:18:59.375539] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.661 Initializing NVMe Controllers 00:15:09.661 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.661 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.661 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:09.661 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:09.661 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:09.661 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:09.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:09.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:09.661 Initialization complete. Launching workers. 
00:15:09.661 Starting thread on core 1 with urgent priority queue 00:15:09.661 Starting thread on core 2 with urgent priority queue 00:15:09.661 Starting thread on core 3 with urgent priority queue 00:15:09.661 Starting thread on core 0 with urgent priority queue 00:15:09.661 SPDK bdev Controller (SPDK1 ) core 0: 5111.33 IO/s 19.56 secs/100000 ios 00:15:09.661 SPDK bdev Controller (SPDK1 ) core 1: 4885.67 IO/s 20.47 secs/100000 ios 00:15:09.661 SPDK bdev Controller (SPDK1 ) core 2: 5335.67 IO/s 18.74 secs/100000 ios 00:15:09.661 SPDK bdev Controller (SPDK1 ) core 3: 5054.00 IO/s 19.79 secs/100000 ios 00:15:09.661 ======================================================== 00:15:09.661 00:15:09.661 06:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:09.661 [2024-12-08 06:18:59.682243] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.661 Initializing NVMe Controllers 00:15:09.661 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.661 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.661 Namespace ID: 1 size: 0GB 00:15:09.661 Initialization complete. 00:15:09.661 INFO: using host memory buffer for IO 00:15:09.661 Hello world! 00:15:09.661 [2024-12-08 06:18:59.716912] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.661 06:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:09.921 [2024-12-08 06:19:00.030273] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.307 Initializing NVMe Controllers 00:15:11.307 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.307 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.307 Initialization complete. Launching workers. 
00:15:11.307 submit (in ns) avg, min, max = 9064.3, 3518.9, 4016595.6 00:15:11.307 complete (in ns) avg, min, max = 26783.6, 2063.3, 4019594.4 00:15:11.307 00:15:11.307 Submit histogram 00:15:11.307 ================ 00:15:11.307 Range in us Cumulative Count 00:15:11.307 3.508 - 3.532: 0.0331% ( 4) 00:15:11.307 3.532 - 3.556: 0.2235% ( 23) 00:15:11.307 3.556 - 3.579: 1.0679% ( 102) 00:15:11.307 3.579 - 3.603: 3.1623% ( 253) 00:15:11.307 3.603 - 3.627: 7.3179% ( 502) 00:15:11.307 3.627 - 3.650: 14.3377% ( 848) 00:15:11.307 3.650 - 3.674: 22.0447% ( 931) 00:15:11.307 3.674 - 3.698: 29.5033% ( 901) 00:15:11.307 3.698 - 3.721: 35.5050% ( 725) 00:15:11.307 3.721 - 3.745: 40.2980% ( 579) 00:15:11.307 3.745 - 3.769: 45.5050% ( 629) 00:15:11.307 3.769 - 3.793: 50.3808% ( 589) 00:15:11.307 3.793 - 3.816: 54.3957% ( 485) 00:15:11.307 3.816 - 3.840: 58.1043% ( 448) 00:15:11.307 3.840 - 3.864: 61.9040% ( 459) 00:15:11.307 3.864 - 3.887: 66.2334% ( 523) 00:15:11.307 3.887 - 3.911: 70.6126% ( 529) 00:15:11.307 3.911 - 3.935: 75.2980% ( 566) 00:15:11.307 3.935 - 3.959: 78.7666% ( 419) 00:15:11.307 3.959 - 3.982: 81.6391% ( 347) 00:15:11.307 3.982 - 4.006: 84.2219% ( 312) 00:15:11.307 4.006 - 4.030: 86.0182% ( 217) 00:15:11.307 4.030 - 4.053: 87.5414% ( 184) 00:15:11.307 4.053 - 4.077: 88.8825% ( 162) 00:15:11.307 4.077 - 4.101: 90.0579% ( 142) 00:15:11.307 4.101 - 4.124: 91.2666% ( 146) 00:15:11.307 4.124 - 4.148: 92.4172% ( 139) 00:15:11.307 4.148 - 4.172: 93.2616% ( 102) 00:15:11.307 4.172 - 4.196: 93.9570% ( 84) 00:15:11.307 4.196 - 4.219: 94.6109% ( 79) 00:15:11.307 4.219 - 4.243: 95.0083% ( 48) 00:15:11.307 4.243 - 4.267: 95.3228% ( 38) 00:15:11.307 4.267 - 4.290: 95.5795% ( 31) 00:15:11.307 4.290 - 4.314: 95.7285% ( 18) 00:15:11.307 4.314 - 4.338: 95.8858% ( 19) 00:15:11.307 4.338 - 4.361: 96.0513% ( 20) 00:15:11.307 4.361 - 4.385: 96.1424% ( 11) 00:15:11.307 4.385 - 4.409: 96.2583% ( 14) 00:15:11.307 4.409 - 4.433: 96.3328% ( 9) 00:15:11.307 4.433 - 4.456: 96.3576% ( 3) 00:15:11.307 4.456 - 4.480: 96.3742% ( 2) 00:15:11.307 4.480 - 4.504: 96.4073% ( 4) 00:15:11.307 4.504 - 4.527: 96.4321% ( 3) 00:15:11.307 4.527 - 4.551: 96.4570% ( 3) 00:15:11.307 4.551 - 4.575: 96.4652% ( 1) 00:15:11.307 4.599 - 4.622: 96.4818% ( 2) 00:15:11.307 4.646 - 4.670: 96.4983% ( 2) 00:15:11.307 4.670 - 4.693: 96.5149% ( 2) 00:15:11.307 4.693 - 4.717: 96.5315% ( 2) 00:15:11.307 4.717 - 4.741: 96.5397% ( 1) 00:15:11.307 4.741 - 4.764: 96.5480% ( 1) 00:15:11.307 4.764 - 4.788: 96.5894% ( 5) 00:15:11.307 4.788 - 4.812: 96.6391% ( 6) 00:15:11.307 4.812 - 4.836: 96.6887% ( 6) 00:15:11.307 4.836 - 4.859: 96.7384% ( 6) 00:15:11.307 4.859 - 4.883: 96.7550% ( 2) 00:15:11.307 4.883 - 4.907: 96.7798% ( 3) 00:15:11.307 4.907 - 4.930: 96.8129% ( 4) 00:15:11.307 4.930 - 4.954: 96.8791% ( 8) 00:15:11.307 4.954 - 4.978: 96.9288% ( 6) 00:15:11.307 4.978 - 5.001: 96.9785% ( 6) 00:15:11.307 5.001 - 5.025: 97.0199% ( 5) 00:15:11.307 5.025 - 5.049: 97.0861% ( 8) 00:15:11.307 5.049 - 5.073: 97.1275% ( 5) 00:15:11.307 5.073 - 5.096: 97.1606% ( 4) 00:15:11.307 5.096 - 5.120: 97.2020% ( 5) 00:15:11.307 5.120 - 5.144: 97.2268% ( 3) 00:15:11.307 5.144 - 5.167: 97.2848% ( 7) 00:15:11.307 5.167 - 5.191: 97.3179% ( 4) 00:15:11.307 5.191 - 5.215: 97.3262% ( 1) 00:15:11.307 5.215 - 5.239: 97.3344% ( 1) 00:15:11.307 5.239 - 5.262: 97.3510% ( 2) 00:15:11.307 5.262 - 5.286: 97.3758% ( 3) 00:15:11.307 5.286 - 5.310: 97.3924% ( 2) 00:15:11.307 5.333 - 5.357: 97.4172% ( 3) 00:15:11.307 5.357 - 5.381: 97.4421% ( 3) 00:15:11.307 5.428 - 5.452: 97.4669% ( 3) 
00:15:11.307 5.452 - 5.476: 97.4752% ( 1) 00:15:11.307 5.476 - 5.499: 97.4834% ( 1) 00:15:11.307 5.547 - 5.570: 97.4917% ( 1) 00:15:11.307 5.594 - 5.618: 97.5000% ( 1) 00:15:11.307 5.618 - 5.641: 97.5083% ( 1) 00:15:11.307 5.665 - 5.689: 97.5166% ( 1) 00:15:11.307 5.736 - 5.760: 97.5331% ( 2) 00:15:11.307 5.760 - 5.784: 97.5414% ( 1) 00:15:11.307 5.807 - 5.831: 97.5497% ( 1) 00:15:11.307 5.831 - 5.855: 97.5579% ( 1) 00:15:11.307 5.902 - 5.926: 97.5662% ( 1) 00:15:11.307 6.044 - 6.068: 97.5745% ( 1) 00:15:11.307 6.116 - 6.163: 97.5828% ( 1) 00:15:11.307 6.163 - 6.210: 97.6076% ( 3) 00:15:11.307 6.210 - 6.258: 97.6242% ( 2) 00:15:11.307 6.258 - 6.305: 97.6325% ( 1) 00:15:11.307 6.495 - 6.542: 97.6407% ( 1) 00:15:11.307 6.684 - 6.732: 97.6490% ( 1) 00:15:11.307 6.921 - 6.969: 97.6573% ( 1) 00:15:11.307 6.969 - 7.016: 97.6656% ( 1) 00:15:11.307 7.064 - 7.111: 97.6738% ( 1) 00:15:11.307 7.159 - 7.206: 97.6904% ( 2) 00:15:11.307 7.253 - 7.301: 97.6987% ( 1) 00:15:11.307 7.443 - 7.490: 97.7070% ( 1) 00:15:11.307 7.538 - 7.585: 97.7235% ( 2) 00:15:11.307 7.775 - 7.822: 97.7318% ( 1) 00:15:11.307 7.964 - 8.012: 97.7401% ( 1) 00:15:11.308 8.012 - 8.059: 97.7483% ( 1) 00:15:11.308 8.249 - 8.296: 97.7566% ( 1) 00:15:11.308 8.296 - 8.344: 97.7649% ( 1) 00:15:11.308 8.486 - 8.533: 97.7732% ( 1) 00:15:11.308 8.581 - 8.628: 97.7815% ( 1) 00:15:11.308 8.628 - 8.676: 97.7897% ( 1) 00:15:11.308 8.676 - 8.723: 97.7980% ( 1) 00:15:11.308 8.723 - 8.770: 97.8146% ( 2) 00:15:11.308 8.770 - 8.818: 97.8228% ( 1) 00:15:11.308 8.818 - 8.865: 97.8311% ( 1) 00:15:11.308 8.960 - 9.007: 97.8394% ( 1) 00:15:11.308 9.007 - 9.055: 97.8560% ( 2) 00:15:11.308 9.055 - 9.102: 97.8642% ( 1) 00:15:11.308 9.150 - 9.197: 97.8808% ( 2) 00:15:11.308 9.197 - 9.244: 97.8891% ( 1) 00:15:11.308 9.292 - 9.339: 97.8974% ( 1) 00:15:11.308 9.339 - 9.387: 97.9056% ( 1) 00:15:11.308 9.434 - 9.481: 97.9139% ( 1) 00:15:11.308 9.529 - 9.576: 97.9222% ( 1) 00:15:11.308 9.576 - 9.624: 97.9305% ( 1) 00:15:11.308 9.624 - 9.671: 97.9387% ( 1) 00:15:11.308 9.671 - 9.719: 97.9470% ( 1) 00:15:11.308 9.719 - 9.766: 97.9553% ( 1) 00:15:11.308 9.766 - 9.813: 97.9719% ( 2) 00:15:11.308 9.813 - 9.861: 98.0132% ( 5) 00:15:11.308 9.908 - 9.956: 98.0215% ( 1) 00:15:11.308 10.098 - 10.145: 98.0381% ( 2) 00:15:11.308 10.193 - 10.240: 98.0546% ( 2) 00:15:11.308 10.287 - 10.335: 98.0712% ( 2) 00:15:11.308 10.335 - 10.382: 98.0795% ( 1) 00:15:11.308 10.430 - 10.477: 98.1043% ( 3) 00:15:11.308 10.477 - 10.524: 98.1209% ( 2) 00:15:11.308 10.524 - 10.572: 98.1291% ( 1) 00:15:11.308 10.572 - 10.619: 98.1457% ( 2) 00:15:11.308 10.667 - 10.714: 98.1540% ( 1) 00:15:11.308 10.714 - 10.761: 98.1705% ( 2) 00:15:11.308 10.761 - 10.809: 98.1871% ( 2) 00:15:11.308 10.904 - 10.951: 98.1954% ( 1) 00:15:11.308 11.236 - 11.283: 98.2119% ( 2) 00:15:11.308 11.425 - 11.473: 98.2202% ( 1) 00:15:11.308 11.520 - 11.567: 98.2450% ( 3) 00:15:11.308 11.615 - 11.662: 98.2533% ( 1) 00:15:11.308 11.710 - 11.757: 98.2616% ( 1) 00:15:11.308 11.757 - 11.804: 98.2699% ( 1) 00:15:11.308 12.231 - 12.326: 98.2864% ( 2) 00:15:11.308 12.326 - 12.421: 98.3030% ( 2) 00:15:11.308 12.421 - 12.516: 98.3113% ( 1) 00:15:11.308 12.610 - 12.705: 98.3278% ( 2) 00:15:11.308 12.705 - 12.800: 98.3526% ( 3) 00:15:11.308 12.800 - 12.895: 98.3692% ( 2) 00:15:11.308 13.274 - 13.369: 98.3858% ( 2) 00:15:11.308 13.464 - 13.559: 98.4023% ( 2) 00:15:11.308 13.843 - 13.938: 98.4106% ( 1) 00:15:11.308 14.127 - 14.222: 98.4272% ( 2) 00:15:11.308 14.222 - 14.317: 98.4354% ( 1) 00:15:11.308 14.317 - 14.412: 98.4437% ( 1) 
00:15:11.308 14.507 - 14.601: 98.4685% ( 3) 00:15:11.308 14.601 - 14.696: 98.4768% ( 1) 00:15:11.308 14.696 - 14.791: 98.4934% ( 2) 00:15:11.308 14.791 - 14.886: 98.5017% ( 1) 00:15:11.308 14.886 - 14.981: 98.5099% ( 1) 00:15:11.308 15.076 - 15.170: 98.5182% ( 1) 00:15:11.308 15.360 - 15.455: 98.5265% ( 1) 00:15:11.308 16.972 - 17.067: 98.5348% ( 1) 00:15:11.308 17.067 - 17.161: 98.5430% ( 1) 00:15:11.308 17.161 - 17.256: 98.5513% ( 1) 00:15:11.308 17.256 - 17.351: 98.5762% ( 3) 00:15:11.308 17.351 - 17.446: 98.6175% ( 5) 00:15:11.308 17.446 - 17.541: 98.6589% ( 5) 00:15:11.308 17.541 - 17.636: 98.7169% ( 7) 00:15:11.308 17.636 - 17.730: 98.7583% ( 5) 00:15:11.308 17.730 - 17.825: 98.7831% ( 3) 00:15:11.308 17.825 - 17.920: 98.8659% ( 10) 00:15:11.308 17.920 - 18.015: 98.9321% ( 8) 00:15:11.308 18.015 - 18.110: 99.0315% ( 12) 00:15:11.308 18.110 - 18.204: 99.1474% ( 14) 00:15:11.308 18.204 - 18.299: 99.2053% ( 7) 00:15:11.308 18.299 - 18.394: 99.2798% ( 9) 00:15:11.308 18.394 - 18.489: 99.3626% ( 10) 00:15:11.308 18.489 - 18.584: 99.4950% ( 16) 00:15:11.308 18.584 - 18.679: 99.6026% ( 13) 00:15:11.308 18.679 - 18.773: 99.6275% ( 3) 00:15:11.308 18.773 - 18.868: 99.6523% ( 3) 00:15:11.308 18.868 - 18.963: 99.6854% ( 4) 00:15:11.308 18.963 - 19.058: 99.7185% ( 4) 00:15:11.308 19.058 - 19.153: 99.7268% ( 1) 00:15:11.308 19.153 - 19.247: 99.7351% ( 1) 00:15:11.308 19.247 - 19.342: 99.7517% ( 2) 00:15:11.308 19.342 - 19.437: 99.7682% ( 2) 00:15:11.308 19.437 - 19.532: 99.7765% ( 1) 00:15:11.308 19.627 - 19.721: 99.7848% ( 1) 00:15:11.308 20.006 - 20.101: 99.7930% ( 1) 00:15:11.308 20.196 - 20.290: 99.8013% ( 1) 00:15:11.308 20.480 - 20.575: 99.8096% ( 1) 00:15:11.308 21.239 - 21.333: 99.8179% ( 1) 00:15:11.308 22.566 - 22.661: 99.8262% ( 1) 00:15:11.308 22.661 - 22.756: 99.8344% ( 1) 00:15:11.308 23.893 - 23.988: 99.8427% ( 1) 00:15:11.308 24.462 - 24.652: 99.8510% ( 1) 00:15:11.308 26.738 - 26.927: 99.8593% ( 1) 00:15:11.308 27.496 - 27.686: 99.8675% ( 1) 00:15:11.308 29.013 - 29.203: 99.8758% ( 1) 00:15:11.308 3980.705 - 4004.978: 99.9338% ( 7) 00:15:11.308 4004.978 - 4029.250: 100.0000% ( 8) 00:15:11.308 00:15:11.308 Complete histogram 00:15:11.308 ================== 00:15:11.308 Range in us Cumulative Count 00:15:11.308 2.062 - 2.074: 2.6573% ( 321) 00:15:11.308 2.074 - 2.086: 25.8940% ( 2807) 00:15:11.308 2.086 - 2.098: 30.0248% ( 499) 00:15:11.308 2.098 - 2.110: 35.9768% ( 719) 00:15:11.308 2.110 - 2.121: 46.4156% ( 1261) 00:15:11.308 2.121 - 2.133: 48.0795% ( 201) 00:15:11.308 2.133 - 2.145: 54.6772% ( 797) 00:15:11.308 2.145 - 2.157: 63.5844% ( 1076) 00:15:11.308 2.157 - 2.169: 64.8427% ( 152) 00:15:11.308 2.169 - 2.181: 68.9652% ( 498) 00:15:11.308 2.181 - 2.193: 72.5166% ( 429) 00:15:11.308 2.193 - 2.204: 73.3195% ( 97) 00:15:11.308 2.204 - 2.216: 76.1755% ( 345) 00:15:11.308 2.216 - 2.228: 82.3179% ( 742) 00:15:11.308 2.228 - 2.240: 84.8262% ( 303) 00:15:11.308 2.240 - 2.252: 87.4007% ( 311) 00:15:11.308 2.252 - 2.264: 90.0497% ( 320) 00:15:11.308 2.264 - 2.276: 90.7202% ( 81) 00:15:11.308 2.276 - 2.287: 91.4983% ( 94) 00:15:11.308 2.287 - 2.299: 92.8063% ( 158) 00:15:11.308 2.299 - 2.311: 94.0646% ( 152) 00:15:11.308 2.311 - 2.323: 94.6026% ( 65) 00:15:11.308 2.323 - 2.335: 94.6854% ( 10) 00:15:11.308 2.335 - 2.347: 94.7517% ( 8) 00:15:11.308 2.347 - 2.359: 94.8262% ( 9) 00:15:11.308 2.359 - 2.370: 94.9669% ( 17) 00:15:11.308 2.370 - 2.382: 95.3311% ( 44) 00:15:11.308 2.382 - 2.394: 95.7368% ( 49) 00:15:11.308 2.394 - 2.406: 95.9851% ( 30) 00:15:11.308 2.406 - 2.418: 96.1672% ( 22) 
00:15:11.308 2.418 - 2.430: 96.3825% ( 26) 00:15:11.308 2.430 - 2.441: 96.5728% ( 23) 00:15:11.308 2.441 - 2.453: 96.7715% ( 24) 00:15:11.308 2.453 - 2.465: 96.9785% ( 25) 00:15:11.308 2.465 - 2.477: 97.1440% ( 20) 00:15:11.308 2.477 - 2.489: 97.3096% ( 20) 00:15:11.308 2.489 - 2.501: 97.5414% ( 28) 00:15:11.308 2.501 - 2.513: 97.6738% ( 16) 00:15:11.308 2.513 - 2.524: 97.7318% ( 7) 00:15:11.308 2.524 - 2.536: 97.7732% ( 5) 00:15:11.308 2.536 - 2.548: 97.8642% ( 11) 00:15:11.308 2.548 - 2.560: 97.9305% ( 8) 00:15:11.308 2.560 - 2.572: 98.0215% ( 11) 00:15:11.308 2.572 - 2.584: 98.0877% ( 8) 00:15:11.308 2.584 - 2.596: 98.1374% ( 6) 00:15:11.308 2.596 - 2.607: 98.1457% ( 1) 00:15:11.308 2.607 - 2.619: 98.1705% ( 3) 00:15:11.308 2.619 - 2.631: 98.1954% ( 3) 00:15:11.308 2.631 - 2.643: 98.2285% ( 4) 00:15:11.308 2.643 - 2.655: 98.2450% ( 2) 00:15:11.308 2.655 - 2.667: 98.2533% ( 1) 00:15:11.308 2.667 - 2.679: 98.2616% ( 1) 00:15:11.308 2.690 - 2.702: 98.2699% ( 1) 00:15:11.308 2.738 - 2.750: 98.2781% ( 1) 00:15:11.308 2.773 - 2.785: 98.2864% ( 1) 00:15:11.308 2.868 - 2.880: 98.2947% ( 1) 00:15:11.308 2.963 - 2.975: 98.3113% ( 2) 00:15:11.308 2.975 - 2.987: 98.3195% ( 1) 00:15:11.308 3.034 - 3.058: 98.3278% ( 1) 00:15:11.308 3.105 - 3.129: 98.3361% ( 1) 00:15:11.308 3.153 - 3.176: 98.3444% ( 1) 00:15:11.308 3.271 - 3.295: 98.3609% ( 2) 00:15:11.308 3.295 - 3.319: 98.3692% ( 1) 00:15:11.308 3.319 - 3.342: 98.3775% ( 1) 00:15:11.308 3.342 - 3.366: 98.3858% ( 1) 00:15:11.308 3.461 - 3.484: 98.4106% ( 3) 00:15:11.308 3.508 - 3.532: 98.4189% ( 1) 00:15:11.308 3.532 - 3.556: 98.4354% ( 2) 00:15:11.308 3.556 - 3.579: 98.4520% ( 2) 00:15:11.308 3.579 - 3.603: 98.4603% ( 1) 00:15:11.308 3.603 - 3.627: 98.4685% ( 1) 00:15:11.308 3.674 - 3.698: 98.4851% ( 2) 00:15:11.308 3.698 - 3.721: 98.4934% ( 1) 00:15:11.308 3.721 - 3.745: 98.5182% ( 3) 00:15:11.308 3.745 - 3.769: 98.5348% ( 2) 00:15:11.308 3.769 - 3.793: 98.5430% ( 1) 00:15:11.308 3.793 - 3.816: 98.5513% ( 1) 00:15:11.308 3.840 - 3.864: 98.5596% ( 1) 00:15:11.308 3.864 - 3.887: 98.5679% ( 1) 00:15:11.309 3.935 - 3.959: 98.5762% ( 1) 00:15:11.309 4.148 - 4.172: 98.5844% ( 1) 00:15:11.309 4.599 - 4.622: 98.5927% ( 1) 00:15:11.309 5.879 - 5.902: 98.6010% ( 1) 00:15:11.309 6.542 - 6.590: 98.6093% ( 1) 00:15:11.309 6.732 - 6.779: 98.6175% ( 1) 00:15:11.309 6.827 - 6.874: 98.6258% ( 1) 00:15:11.309 6.921 - 6.969: 98.6341% ( 1) 00:15:11.309 7.064 - 7.111: 98.6507% ( 2) 00:15:11.309 7.348 - 7.396: 98.6589% ( 1) 00:15:11.309 7.396 - 7.443: 98.6672% ( 1) 00:15:11.309 7.727 - 7.775: 98.6755% ( 1) 00:15:11.309 7.964 - 8.012: 98.6838% ( 1) 00:15:11.309 8.012 - 8.059: 98.6921% ( 1) 00:15:11.309 8.154 - 8.201: 98.7003% ( 1) 00:15:11.309 8.344 - 8.391: 98.7086% ( 1) 00:15:11.309 8.486 - 8.533: 98.7169% ( 1) 00:15:11.309 8.628 - 8.676: 98.7252% ( 1) 00:15:11.309 8.676 - 8.723: 98.7334% ( 1) 00:15:11.309 8.770 - 8.818: 98.7500% ( 2) 00:15:11.309 9.055 - 9.102: 98.7583% ( 1) 00:15:11.309 11.710 - 11.757: 98.7666% ( 1) 00:15:11.309 14.317 - 14.412: 98.7748% ( 1) 00:15:11.309 15.644 - 15.739: 98.7831% ( 1) 00:15:11.309 15.739 - 15.834: 98.8079% ( 3) 00:15:11.309 15.834 - 15.929: 98.8245% ( 2) 00:15:11.309 15.929 - 16.024: 98.8576% ( 4) 00:15:11.309 16.024 - 16.119: 98.8825% ( 3) 00:15:11.309 16.119 - 16.213: 98.9073% ( 3) 00:15:11.309 16.213 - 16.308: 98.9487% ( 5) 00:15:11.309 16.308 - 16.403: 99.0149% ( 8) 00:15:11.309 16.403 - 16.498: 99.0646% ( 6) 00:15:11.309 16.498 - 16.593: 99.1060% ( 5) 00:15:11.309 16.593 - 16.687: 99.1308% ( 3) 00:15:11.309 16.687 - 16.782: 
99.1639% ( 4) 00:15:11.309 16.782 - 16.877: 99.2053% ( 5) 00:15:11.309 16.877 - 16.972: 99.2467% ( 5) 00:15:11.309 16.972 - 17.067: 99.2550% ( 1) 00:15:11.309 17.067 - 17.161: 99.2798% ( 3) 00:15:11.309 17.161 - 17.256: 99.2881% ( 1) 00:15:11.309 17.351 - 17.446: 99.2964% ( 1) 00:15:11.309 17.446 - 17.541: 99.3046% ( 1) 00:15:11.309 17.730 - 17.825: 99.3129% ( 1) 00:15:11.309 17.825 - 17.920: 99.3295% ( 2) 00:15:11.309 17.920 - 18.015: 99.3460% ( 2) 00:15:11.309 18.110 - 18.204: 99.3543% ( 1) 00:15:11.309 18.204 - 18.299: 99.3626% ( 1) 00:15:11.309 18.299 - 18.394: 99.3709% ( 1) 00:15:11.309 18.868 - 18.963: 99.3791% ( 1) 00:15:11.309 20.101 - 20.196: 99.3874% ( 1) 00:15:11.309 3980.705 - 4004.978: 99.7930% ( 49) 00:15:11.309 4004.978 - 4029.250: 100.0000%[2024-12-08 06:19:01.051867] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.309 ( 25) 00:15:11.309 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.309 [ 00:15:11.309 { 00:15:11.309 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:11.309 "subtype": "Discovery", 00:15:11.309 "listen_addresses": [], 00:15:11.309 "allow_any_host": true, 00:15:11.309 "hosts": [] 00:15:11.309 }, 00:15:11.309 { 00:15:11.309 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:11.309 "subtype": "NVMe", 00:15:11.309 "listen_addresses": [ 00:15:11.309 { 00:15:11.309 "trtype": "VFIOUSER", 00:15:11.309 "adrfam": "IPv4", 00:15:11.309 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:11.309 "trsvcid": "0" 00:15:11.309 } 00:15:11.309 ], 00:15:11.309 "allow_any_host": true, 00:15:11.309 "hosts": [], 00:15:11.309 "serial_number": "SPDK1", 00:15:11.309 "model_number": "SPDK bdev Controller", 00:15:11.309 "max_namespaces": 32, 00:15:11.309 "min_cntlid": 1, 00:15:11.309 "max_cntlid": 65519, 00:15:11.309 "namespaces": [ 00:15:11.309 { 00:15:11.309 "nsid": 1, 00:15:11.309 "bdev_name": "Malloc1", 00:15:11.309 "name": "Malloc1", 00:15:11.309 "nguid": "7E7F0F5EA2D747B2BCCB000DBBC7CED0", 00:15:11.309 "uuid": "7e7f0f5e-a2d7-47b2-bccb-000dbbc7ced0" 00:15:11.309 } 00:15:11.309 ] 00:15:11.309 }, 00:15:11.309 { 00:15:11.309 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:11.309 "subtype": "NVMe", 00:15:11.309 "listen_addresses": [ 00:15:11.309 { 00:15:11.309 "trtype": "VFIOUSER", 00:15:11.309 "adrfam": "IPv4", 00:15:11.309 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:11.309 "trsvcid": "0" 00:15:11.309 } 00:15:11.309 ], 00:15:11.309 "allow_any_host": true, 00:15:11.309 "hosts": [], 00:15:11.309 "serial_number": "SPDK2", 00:15:11.309 "model_number": "SPDK bdev Controller", 00:15:11.309 "max_namespaces": 32, 00:15:11.309 "min_cntlid": 1, 00:15:11.309 "max_cntlid": 65519, 00:15:11.309 "namespaces": [ 00:15:11.309 { 00:15:11.309 "nsid": 1, 00:15:11.309 "bdev_name": "Malloc2", 
00:15:11.309 "name": "Malloc2", 00:15:11.309 "nguid": "76C3E51DE1D94D3EB4A4FD44522EEC9A", 00:15:11.309 "uuid": "76c3e51d-e1d9-4d3e-b4a4-fd44522eec9a" 00:15:11.309 } 00:15:11.309 ] 00:15:11.309 } 00:15:11.309 ] 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1042463 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:11.309 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:11.568 [2024-12-08 06:19:01.571729] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.825 Malloc3 00:15:11.825 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:12.084 [2024-12-08 06:19:01.950584] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.084 06:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:12.084 Asynchronous Event Request test 00:15:12.084 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.084 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.084 Registering asynchronous event callbacks... 00:15:12.084 Starting namespace attribute notice tests for all controllers... 00:15:12.084 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:12.084 aer_cb - Changed Namespace 00:15:12.084 Cleaning up... 
00:15:12.344 [ 00:15:12.344 { 00:15:12.344 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:12.344 "subtype": "Discovery", 00:15:12.344 "listen_addresses": [], 00:15:12.344 "allow_any_host": true, 00:15:12.344 "hosts": [] 00:15:12.344 }, 00:15:12.344 { 00:15:12.344 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:12.344 "subtype": "NVMe", 00:15:12.344 "listen_addresses": [ 00:15:12.344 { 00:15:12.344 "trtype": "VFIOUSER", 00:15:12.344 "adrfam": "IPv4", 00:15:12.344 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:12.344 "trsvcid": "0" 00:15:12.344 } 00:15:12.344 ], 00:15:12.344 "allow_any_host": true, 00:15:12.344 "hosts": [], 00:15:12.344 "serial_number": "SPDK1", 00:15:12.344 "model_number": "SPDK bdev Controller", 00:15:12.344 "max_namespaces": 32, 00:15:12.344 "min_cntlid": 1, 00:15:12.344 "max_cntlid": 65519, 00:15:12.344 "namespaces": [ 00:15:12.344 { 00:15:12.344 "nsid": 1, 00:15:12.344 "bdev_name": "Malloc1", 00:15:12.344 "name": "Malloc1", 00:15:12.344 "nguid": "7E7F0F5EA2D747B2BCCB000DBBC7CED0", 00:15:12.344 "uuid": "7e7f0f5e-a2d7-47b2-bccb-000dbbc7ced0" 00:15:12.344 }, 00:15:12.344 { 00:15:12.344 "nsid": 2, 00:15:12.344 "bdev_name": "Malloc3", 00:15:12.344 "name": "Malloc3", 00:15:12.344 "nguid": "846EBEE8BF8E4D06B805BE0DB18BDA62", 00:15:12.344 "uuid": "846ebee8-bf8e-4d06-b805-be0db18bda62" 00:15:12.344 } 00:15:12.344 ] 00:15:12.344 }, 00:15:12.344 { 00:15:12.344 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:12.344 "subtype": "NVMe", 00:15:12.344 "listen_addresses": [ 00:15:12.344 { 00:15:12.344 "trtype": "VFIOUSER", 00:15:12.344 "adrfam": "IPv4", 00:15:12.344 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:12.344 "trsvcid": "0" 00:15:12.344 } 00:15:12.344 ], 00:15:12.345 "allow_any_host": true, 00:15:12.345 "hosts": [], 00:15:12.345 "serial_number": "SPDK2", 00:15:12.345 "model_number": "SPDK bdev Controller", 00:15:12.345 "max_namespaces": 32, 00:15:12.345 "min_cntlid": 1, 00:15:12.345 "max_cntlid": 65519, 00:15:12.345 "namespaces": [ 00:15:12.345 { 00:15:12.345 "nsid": 1, 00:15:12.345 "bdev_name": "Malloc2", 00:15:12.345 "name": "Malloc2", 00:15:12.345 "nguid": "76C3E51DE1D94D3EB4A4FD44522EEC9A", 00:15:12.345 "uuid": "76c3e51d-e1d9-4d3e-b4a4-fd44522eec9a" 00:15:12.345 } 00:15:12.345 ] 00:15:12.345 } 00:15:12.345 ] 00:15:12.345 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1042463 00:15:12.345 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:12.345 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:12.345 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:12.345 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:12.345 [2024-12-08 06:19:02.251607] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:15:12.345 [2024-12-08 06:19:02.251642] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042597 ] 00:15:12.345 [2024-12-08 06:19:02.301430] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:12.345 [2024-12-08 06:19:02.303771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:12.345 [2024-12-08 06:19:02.303806] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f708115c000 00:15:12.345 [2024-12-08 06:19:02.304760] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.345 [2024-12-08 06:19:02.305764] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.345 [2024-12-08 06:19:02.306772] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.345 [2024-12-08 06:19:02.307786] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:12.345 [2024-12-08 06:19:02.308792] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:12.345 [2024-12-08 06:19:02.309801] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.345 [2024-12-08 06:19:02.310811] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:12.345 [2024-12-08 06:19:02.311819] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.345 [2024-12-08 06:19:02.312823] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:12.345 [2024-12-08 06:19:02.312846] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7081151000 00:15:12.345 [2024-12-08 06:19:02.313966] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:12.345 [2024-12-08 06:19:02.329333] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:12.345 [2024-12-08 06:19:02.329375] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:12.345 [2024-12-08 06:19:02.334480] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:12.345 [2024-12-08 06:19:02.334536] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:12.345 [2024-12-08 06:19:02.334630] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:12.345 
[2024-12-08 06:19:02.334654] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:12.345 [2024-12-08 06:19:02.334665] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:12.345 [2024-12-08 06:19:02.335487] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:12.345 [2024-12-08 06:19:02.335513] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:12.345 [2024-12-08 06:19:02.335528] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:12.345 [2024-12-08 06:19:02.336494] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:12.345 [2024-12-08 06:19:02.336515] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:12.345 [2024-12-08 06:19:02.336529] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:12.345 [2024-12-08 06:19:02.337501] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:12.345 [2024-12-08 06:19:02.337522] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:12.345 [2024-12-08 06:19:02.338501] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:12.345 [2024-12-08 06:19:02.338522] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:12.345 [2024-12-08 06:19:02.338531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:12.345 [2024-12-08 06:19:02.338542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:12.345 [2024-12-08 06:19:02.338651] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:12.345 [2024-12-08 06:19:02.338659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:12.345 [2024-12-08 06:19:02.338667] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:12.345 [2024-12-08 06:19:02.339510] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:12.345 [2024-12-08 06:19:02.340512] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:12.345 [2024-12-08 06:19:02.341522] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:12.345 [2024-12-08 06:19:02.342520] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:12.345 [2024-12-08 06:19:02.342596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:12.345 [2024-12-08 06:19:02.343540] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:12.345 [2024-12-08 06:19:02.343560] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:12.345 [2024-12-08 06:19:02.343569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:12.345 [2024-12-08 06:19:02.343593] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:12.345 [2024-12-08 06:19:02.343607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:12.345 [2024-12-08 06:19:02.343631] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:12.345 [2024-12-08 06:19:02.343640] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.345 [2024-12-08 06:19:02.343646] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.345 [2024-12-08 06:19:02.343663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.345 [2024-12-08 06:19:02.351736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:12.345 [2024-12-08 06:19:02.351765] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:12.345 [2024-12-08 06:19:02.351776] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:12.345 [2024-12-08 06:19:02.351783] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:12.345 [2024-12-08 06:19:02.351795] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:12.345 [2024-12-08 06:19:02.351804] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:12.345 [2024-12-08 06:19:02.351812] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:12.345 [2024-12-08 06:19:02.351820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:12.345 [2024-12-08 06:19:02.351833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:12.345 [2024-12-08 
06:19:02.351848] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:12.345 [2024-12-08 06:19:02.359734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:12.345 [2024-12-08 06:19:02.359758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.345 [2024-12-08 06:19:02.359771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.345 [2024-12-08 06:19:02.359783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.345 [2024-12-08 06:19:02.359795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.345 [2024-12-08 06:19:02.359803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.359821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.359836] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:12.346 [2024-12-08 06:19:02.367735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:12.346 [2024-12-08 06:19:02.367754] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:12.346 [2024-12-08 06:19:02.367763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.367775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.367785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.367798] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:12.346 [2024-12-08 06:19:02.375734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:12.346 [2024-12-08 06:19:02.375811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.375829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.375842] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:12.346 [2024-12-08 06:19:02.375857] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:15:12.346 [2024-12-08 06:19:02.375863] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.346 [2024-12-08 06:19:02.375873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:12.346 [2024-12-08 06:19:02.383735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:12.346 [2024-12-08 06:19:02.383764] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:12.346 [2024-12-08 06:19:02.383783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.383799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.383811] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:12.346 [2024-12-08 06:19:02.383819] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.346 [2024-12-08 06:19:02.383825] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.346 [2024-12-08 06:19:02.383835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.346 [2024-12-08 06:19:02.391749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:12.346 [2024-12-08 06:19:02.391779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.391796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.391810] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:12.346 [2024-12-08 06:19:02.391818] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.346 [2024-12-08 06:19:02.391824] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.346 [2024-12-08 06:19:02.391833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.346 [2024-12-08 06:19:02.399731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:12.346 [2024-12-08 06:19:02.399752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.399765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.399780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.399794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.399803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.399811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.399820] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:12.346 [2024-12-08 06:19:02.399831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:12.346 [2024-12-08 06:19:02.399840] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:12.346 [2024-12-08 06:19:02.399865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:12.346 [2024-12-08 06:19:02.407739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:12.346 [2024-12-08 06:19:02.407765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:12.346 [2024-12-08 06:19:02.415733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:12.346 [2024-12-08 06:19:02.415759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:12.346 [2024-12-08 06:19:02.423732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:12.346 [2024-12-08 06:19:02.423758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:12.346 [2024-12-08 06:19:02.431733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:12.346 [2024-12-08 06:19:02.431765] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:12.346 [2024-12-08 06:19:02.431777] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:12.346 [2024-12-08 06:19:02.431783] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:12.346 [2024-12-08 06:19:02.431789] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:12.346 [2024-12-08 06:19:02.431795] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:12.346 [2024-12-08 06:19:02.431804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:12.346 [2024-12-08 06:19:02.431817] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:12.346 
[2024-12-08 06:19:02.431825] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:12.346 [2024-12-08 06:19:02.431831] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.346 [2024-12-08 06:19:02.431839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:12.346 [2024-12-08 06:19:02.431850] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:12.346 [2024-12-08 06:19:02.431858] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.346 [2024-12-08 06:19:02.431864] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.346 [2024-12-08 06:19:02.431873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.346 [2024-12-08 06:19:02.431885] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:12.346 [2024-12-08 06:19:02.431892] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:12.346 [2024-12-08 06:19:02.431898] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.346 [2024-12-08 06:19:02.431907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:12.346 [2024-12-08 06:19:02.439735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:12.346 [2024-12-08 06:19:02.439764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:12.346 [2024-12-08 06:19:02.439783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:12.346 [2024-12-08 06:19:02.439796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:12.346 ===================================================== 00:15:12.346 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:12.346 ===================================================== 00:15:12.346 Controller Capabilities/Features 00:15:12.346 ================================ 00:15:12.346 Vendor ID: 4e58 00:15:12.346 Subsystem Vendor ID: 4e58 00:15:12.346 Serial Number: SPDK2 00:15:12.346 Model Number: SPDK bdev Controller 00:15:12.346 Firmware Version: 25.01 00:15:12.346 Recommended Arb Burst: 6 00:15:12.346 IEEE OUI Identifier: 8d 6b 50 00:15:12.346 Multi-path I/O 00:15:12.346 May have multiple subsystem ports: Yes 00:15:12.346 May have multiple controllers: Yes 00:15:12.346 Associated with SR-IOV VF: No 00:15:12.346 Max Data Transfer Size: 131072 00:15:12.346 Max Number of Namespaces: 32 00:15:12.346 Max Number of I/O Queues: 127 00:15:12.346 NVMe Specification Version (VS): 1.3 00:15:12.346 NVMe Specification Version (Identify): 1.3 00:15:12.346 Maximum Queue Entries: 256 00:15:12.346 Contiguous Queues Required: Yes 00:15:12.346 Arbitration Mechanisms Supported 00:15:12.346 Weighted Round Robin: Not Supported 00:15:12.346 Vendor Specific: Not 
Supported 00:15:12.346 Reset Timeout: 15000 ms 00:15:12.346 Doorbell Stride: 4 bytes 00:15:12.346 NVM Subsystem Reset: Not Supported 00:15:12.346 Command Sets Supported 00:15:12.347 NVM Command Set: Supported 00:15:12.347 Boot Partition: Not Supported 00:15:12.347 Memory Page Size Minimum: 4096 bytes 00:15:12.347 Memory Page Size Maximum: 4096 bytes 00:15:12.347 Persistent Memory Region: Not Supported 00:15:12.347 Optional Asynchronous Events Supported 00:15:12.347 Namespace Attribute Notices: Supported 00:15:12.347 Firmware Activation Notices: Not Supported 00:15:12.347 ANA Change Notices: Not Supported 00:15:12.347 PLE Aggregate Log Change Notices: Not Supported 00:15:12.347 LBA Status Info Alert Notices: Not Supported 00:15:12.347 EGE Aggregate Log Change Notices: Not Supported 00:15:12.347 Normal NVM Subsystem Shutdown event: Not Supported 00:15:12.347 Zone Descriptor Change Notices: Not Supported 00:15:12.347 Discovery Log Change Notices: Not Supported 00:15:12.347 Controller Attributes 00:15:12.347 128-bit Host Identifier: Supported 00:15:12.347 Non-Operational Permissive Mode: Not Supported 00:15:12.347 NVM Sets: Not Supported 00:15:12.347 Read Recovery Levels: Not Supported 00:15:12.347 Endurance Groups: Not Supported 00:15:12.347 Predictable Latency Mode: Not Supported 00:15:12.347 Traffic Based Keep ALive: Not Supported 00:15:12.347 Namespace Granularity: Not Supported 00:15:12.347 SQ Associations: Not Supported 00:15:12.347 UUID List: Not Supported 00:15:12.347 Multi-Domain Subsystem: Not Supported 00:15:12.347 Fixed Capacity Management: Not Supported 00:15:12.347 Variable Capacity Management: Not Supported 00:15:12.347 Delete Endurance Group: Not Supported 00:15:12.347 Delete NVM Set: Not Supported 00:15:12.347 Extended LBA Formats Supported: Not Supported 00:15:12.347 Flexible Data Placement Supported: Not Supported 00:15:12.347 00:15:12.347 Controller Memory Buffer Support 00:15:12.347 ================================ 00:15:12.347 Supported: No 00:15:12.347 00:15:12.347 Persistent Memory Region Support 00:15:12.347 ================================ 00:15:12.347 Supported: No 00:15:12.347 00:15:12.347 Admin Command Set Attributes 00:15:12.347 ============================ 00:15:12.347 Security Send/Receive: Not Supported 00:15:12.347 Format NVM: Not Supported 00:15:12.347 Firmware Activate/Download: Not Supported 00:15:12.347 Namespace Management: Not Supported 00:15:12.347 Device Self-Test: Not Supported 00:15:12.347 Directives: Not Supported 00:15:12.347 NVMe-MI: Not Supported 00:15:12.347 Virtualization Management: Not Supported 00:15:12.347 Doorbell Buffer Config: Not Supported 00:15:12.347 Get LBA Status Capability: Not Supported 00:15:12.347 Command & Feature Lockdown Capability: Not Supported 00:15:12.347 Abort Command Limit: 4 00:15:12.347 Async Event Request Limit: 4 00:15:12.347 Number of Firmware Slots: N/A 00:15:12.347 Firmware Slot 1 Read-Only: N/A 00:15:12.347 Firmware Activation Without Reset: N/A 00:15:12.347 Multiple Update Detection Support: N/A 00:15:12.347 Firmware Update Granularity: No Information Provided 00:15:12.347 Per-Namespace SMART Log: No 00:15:12.347 Asymmetric Namespace Access Log Page: Not Supported 00:15:12.347 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:12.347 Command Effects Log Page: Supported 00:15:12.347 Get Log Page Extended Data: Supported 00:15:12.347 Telemetry Log Pages: Not Supported 00:15:12.347 Persistent Event Log Pages: Not Supported 00:15:12.347 Supported Log Pages Log Page: May Support 00:15:12.347 Commands Supported & 
Effects Log Page: Not Supported 00:15:12.347 Feature Identifiers & Effects Log Page:May Support 00:15:12.347 NVMe-MI Commands & Effects Log Page: May Support 00:15:12.347 Data Area 4 for Telemetry Log: Not Supported 00:15:12.347 Error Log Page Entries Supported: 128 00:15:12.347 Keep Alive: Supported 00:15:12.347 Keep Alive Granularity: 10000 ms 00:15:12.347 00:15:12.347 NVM Command Set Attributes 00:15:12.347 ========================== 00:15:12.347 Submission Queue Entry Size 00:15:12.347 Max: 64 00:15:12.347 Min: 64 00:15:12.347 Completion Queue Entry Size 00:15:12.347 Max: 16 00:15:12.347 Min: 16 00:15:12.347 Number of Namespaces: 32 00:15:12.347 Compare Command: Supported 00:15:12.347 Write Uncorrectable Command: Not Supported 00:15:12.347 Dataset Management Command: Supported 00:15:12.347 Write Zeroes Command: Supported 00:15:12.347 Set Features Save Field: Not Supported 00:15:12.347 Reservations: Not Supported 00:15:12.347 Timestamp: Not Supported 00:15:12.347 Copy: Supported 00:15:12.347 Volatile Write Cache: Present 00:15:12.347 Atomic Write Unit (Normal): 1 00:15:12.347 Atomic Write Unit (PFail): 1 00:15:12.347 Atomic Compare & Write Unit: 1 00:15:12.347 Fused Compare & Write: Supported 00:15:12.347 Scatter-Gather List 00:15:12.347 SGL Command Set: Supported (Dword aligned) 00:15:12.347 SGL Keyed: Not Supported 00:15:12.347 SGL Bit Bucket Descriptor: Not Supported 00:15:12.347 SGL Metadata Pointer: Not Supported 00:15:12.347 Oversized SGL: Not Supported 00:15:12.347 SGL Metadata Address: Not Supported 00:15:12.347 SGL Offset: Not Supported 00:15:12.347 Transport SGL Data Block: Not Supported 00:15:12.347 Replay Protected Memory Block: Not Supported 00:15:12.347 00:15:12.347 Firmware Slot Information 00:15:12.347 ========================= 00:15:12.347 Active slot: 1 00:15:12.347 Slot 1 Firmware Revision: 25.01 00:15:12.347 00:15:12.347 00:15:12.347 Commands Supported and Effects 00:15:12.347 ============================== 00:15:12.347 Admin Commands 00:15:12.347 -------------- 00:15:12.347 Get Log Page (02h): Supported 00:15:12.347 Identify (06h): Supported 00:15:12.347 Abort (08h): Supported 00:15:12.347 Set Features (09h): Supported 00:15:12.347 Get Features (0Ah): Supported 00:15:12.347 Asynchronous Event Request (0Ch): Supported 00:15:12.347 Keep Alive (18h): Supported 00:15:12.347 I/O Commands 00:15:12.347 ------------ 00:15:12.347 Flush (00h): Supported LBA-Change 00:15:12.347 Write (01h): Supported LBA-Change 00:15:12.347 Read (02h): Supported 00:15:12.347 Compare (05h): Supported 00:15:12.347 Write Zeroes (08h): Supported LBA-Change 00:15:12.347 Dataset Management (09h): Supported LBA-Change 00:15:12.347 Copy (19h): Supported LBA-Change 00:15:12.347 00:15:12.347 Error Log 00:15:12.347 ========= 00:15:12.347 00:15:12.347 Arbitration 00:15:12.347 =========== 00:15:12.347 Arbitration Burst: 1 00:15:12.347 00:15:12.347 Power Management 00:15:12.347 ================ 00:15:12.347 Number of Power States: 1 00:15:12.347 Current Power State: Power State #0 00:15:12.347 Power State #0: 00:15:12.347 Max Power: 0.00 W 00:15:12.347 Non-Operational State: Operational 00:15:12.347 Entry Latency: Not Reported 00:15:12.347 Exit Latency: Not Reported 00:15:12.347 Relative Read Throughput: 0 00:15:12.347 Relative Read Latency: 0 00:15:12.347 Relative Write Throughput: 0 00:15:12.347 Relative Write Latency: 0 00:15:12.347 Idle Power: Not Reported 00:15:12.347 Active Power: Not Reported 00:15:12.347 Non-Operational Permissive Mode: Not Supported 00:15:12.347 00:15:12.347 Health Information 
00:15:12.347 ================== 00:15:12.347 Critical Warnings: 00:15:12.347 Available Spare Space: OK 00:15:12.347 Temperature: OK 00:15:12.347 Device Reliability: OK 00:15:12.347 Read Only: No 00:15:12.347 Volatile Memory Backup: OK 00:15:12.347 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:12.347 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:12.347 Available Spare: 0% [2024-12-08 06:19:02.439924] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:12.347 [2024-12-08 06:19:02.447737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:12.347 [2024-12-08 06:19:02.447791] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:12.347 [2024-12-08 06:19:02.447809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.347 [2024-12-08 06:19:02.447821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.347 [2024-12-08 06:19:02.447831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.347 [2024-12-08 06:19:02.447840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.347 [2024-12-08 06:19:02.447927] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:12.347 [2024-12-08 06:19:02.447949] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:12.347 [2024-12-08 06:19:02.448930] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:12.347 [2024-12-08 06:19:02.449002] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:12.347 [2024-12-08 06:19:02.449017] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:12.348 [2024-12-08 06:19:02.449934] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:12.348 [2024-12-08 06:19:02.449960] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:12.348 [2024-12-08 06:19:02.450027] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:12.348 [2024-12-08 06:19:02.451241] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:12.606 Available Spare Threshold: 0% 00:15:12.606 Life Percentage Used: 0% 00:15:12.606 Data Units Read: 0 00:15:12.606 Data Units Written: 0 00:15:12.606 Host Read Commands: 0 00:15:12.606 Host Write Commands: 0 00:15:12.606 Controller Busy Time: 0 minutes 00:15:12.606 Power Cycles: 0 00:15:12.606 Power On Hours: 0 hours 00:15:12.606 Unsafe Shutdowns: 0 00:15:12.606 Unrecoverable Media Errors: 0 00:15:12.606 Lifetime Error Log Entries: 0 00:15:12.606 Warning Temperature 
Time: 0 minutes 00:15:12.606 Critical Temperature Time: 0 minutes 00:15:12.606 00:15:12.606 Number of Queues 00:15:12.606 ================ 00:15:12.606 Number of I/O Submission Queues: 127 00:15:12.606 Number of I/O Completion Queues: 127 00:15:12.606 00:15:12.606 Active Namespaces 00:15:12.606 ================= 00:15:12.606 Namespace ID:1 00:15:12.606 Error Recovery Timeout: Unlimited 00:15:12.606 Command Set Identifier: NVM (00h) 00:15:12.606 Deallocate: Supported 00:15:12.606 Deallocated/Unwritten Error: Not Supported 00:15:12.606 Deallocated Read Value: Unknown 00:15:12.606 Deallocate in Write Zeroes: Not Supported 00:15:12.606 Deallocated Guard Field: 0xFFFF 00:15:12.606 Flush: Supported 00:15:12.606 Reservation: Supported 00:15:12.606 Namespace Sharing Capabilities: Multiple Controllers 00:15:12.607 Size (in LBAs): 131072 (0GiB) 00:15:12.607 Capacity (in LBAs): 131072 (0GiB) 00:15:12.607 Utilization (in LBAs): 131072 (0GiB) 00:15:12.607 NGUID: 76C3E51DE1D94D3EB4A4FD44522EEC9A 00:15:12.607 UUID: 76c3e51d-e1d9-4d3e-b4a4-fd44522eec9a 00:15:12.607 Thin Provisioning: Not Supported 00:15:12.607 Per-NS Atomic Units: Yes 00:15:12.607 Atomic Boundary Size (Normal): 0 00:15:12.607 Atomic Boundary Size (PFail): 0 00:15:12.607 Atomic Boundary Offset: 0 00:15:12.607 Maximum Single Source Range Length: 65535 00:15:12.607 Maximum Copy Length: 65535 00:15:12.607 Maximum Source Range Count: 1 00:15:12.607 NGUID/EUI64 Never Reused: No 00:15:12.607 Namespace Write Protected: No 00:15:12.607 Number of LBA Formats: 1 00:15:12.607 Current LBA Format: LBA Format #00 00:15:12.607 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:12.607 00:15:12.607 06:19:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:12.607 [2024-12-08 06:19:02.689503] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.889 Initializing NVMe Controllers 00:15:17.889 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:17.889 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:17.889 Initialization complete. Launching workers. 
00:15:17.889 ======================================================== 00:15:17.889 Latency(us) 00:15:17.889 Device Information : IOPS MiB/s Average min max 00:15:17.889 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31701.61 123.83 4036.49 1201.43 9002.84 00:15:17.889 ======================================================== 00:15:17.889 Total : 31701.61 123.83 4036.49 1201.43 9002.84 00:15:17.889 00:15:17.889 [2024-12-08 06:19:07.795079] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.889 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:18.148 [2024-12-08 06:19:08.042787] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:23.423 Initializing NVMe Controllers 00:15:23.423 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:23.423 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:23.423 Initialization complete. Launching workers. 00:15:23.423 ======================================================== 00:15:23.423 Latency(us) 00:15:23.423 Device Information : IOPS MiB/s Average min max 00:15:23.423 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29595.03 115.61 4324.63 1236.72 8405.34 00:15:23.423 ======================================================== 00:15:23.423 Total : 29595.03 115.61 4324.63 1236.72 8405.34 00:15:23.423 00:15:23.423 [2024-12-08 06:19:13.067793] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:23.423 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:23.423 [2024-12-08 06:19:13.299846] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.716 [2024-12-08 06:19:18.435885] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.716 Initializing NVMe Controllers 00:15:28.716 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:28.716 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:28.716 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:28.716 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:28.716 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:28.716 Initialization complete. Launching workers. 
00:15:28.716 Starting thread on core 2 00:15:28.716 Starting thread on core 3 00:15:28.716 Starting thread on core 1 00:15:28.716 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:28.716 [2024-12-08 06:19:18.757235] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:32.025 [2024-12-08 06:19:21.938001] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:32.025 Initializing NVMe Controllers 00:15:32.025 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.025 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.025 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:32.025 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:32.025 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:32.025 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:32.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:32.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:32.025 Initialization complete. Launching workers. 00:15:32.025 Starting thread on core 1 with urgent priority queue 00:15:32.025 Starting thread on core 2 with urgent priority queue 00:15:32.025 Starting thread on core 3 with urgent priority queue 00:15:32.025 Starting thread on core 0 with urgent priority queue 00:15:32.025 SPDK bdev Controller (SPDK2 ) core 0: 3301.33 IO/s 30.29 secs/100000 ios 00:15:32.025 SPDK bdev Controller (SPDK2 ) core 1: 3940.00 IO/s 25.38 secs/100000 ios 00:15:32.025 SPDK bdev Controller (SPDK2 ) core 2: 3222.33 IO/s 31.03 secs/100000 ios 00:15:32.025 SPDK bdev Controller (SPDK2 ) core 3: 3777.00 IO/s 26.48 secs/100000 ios 00:15:32.025 ======================================================== 00:15:32.025 00:15:32.025 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:32.283 [2024-12-08 06:19:22.251216] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:32.283 Initializing NVMe Controllers 00:15:32.283 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.283 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.283 Namespace ID: 1 size: 0GB 00:15:32.283 Initialization complete. 00:15:32.283 INFO: using host memory buffer for IO 00:15:32.283 Hello world! 
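Note: hello_world is the smallest end-to-end check of the vfio-user path: attach to the controller, write one buffer to the namespace, read it back, and print it (the "Hello world!" line above). A minimal sketch of the same run, with flags copied from the logged invocation; the socket path and subsystem NQN below are placeholders for whichever endpoint you target, not values from this job:
  # Sketch only: point -r at any live vfio-user endpoint; see the tool's
  # --help for the exact semantics of -d and -g.
  ./build/examples/hello_world -d 256 -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/<domain>/<id> subnqn:<subsystem-nqn>'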
00:15:32.283 [2024-12-08 06:19:22.261374] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:32.283 06:19:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:32.543 [2024-12-08 06:19:22.563247] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.948 Initializing NVMe Controllers 00:15:33.948 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.948 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.948 Initialization complete. Launching workers. 00:15:33.948 submit (in ns) avg, min, max = 6526.1, 3485.6, 4005331.1 00:15:33.948 complete (in ns) avg, min, max = 26007.3, 2103.3, 4021570.0 00:15:33.948 00:15:33.948 Submit histogram 00:15:33.948 ================ 00:15:33.948 Range in us Cumulative Count 00:15:33.948 3.484 - 3.508: 0.0081% ( 1) 00:15:33.948 3.508 - 3.532: 0.4787% ( 58) 00:15:33.948 3.532 - 3.556: 1.8257% ( 166) 00:15:33.948 3.556 - 3.579: 4.9091% ( 380) 00:15:33.948 3.579 - 3.603: 10.3376% ( 669) 00:15:33.948 3.603 - 3.627: 19.7420% ( 1159) 00:15:33.948 3.627 - 3.650: 29.8036% ( 1240) 00:15:33.948 3.650 - 3.674: 38.7374% ( 1101) 00:15:33.948 3.674 - 3.698: 44.8880% ( 758) 00:15:33.948 3.698 - 3.721: 51.2496% ( 784) 00:15:33.948 3.721 - 3.745: 55.6394% ( 541) 00:15:33.948 3.745 - 3.769: 59.8913% ( 524) 00:15:33.948 3.769 - 3.793: 63.3723% ( 429) 00:15:33.948 3.793 - 3.816: 67.1211% ( 462) 00:15:33.948 3.816 - 3.840: 70.7238% ( 444) 00:15:33.948 3.840 - 3.864: 75.0649% ( 535) 00:15:33.948 3.864 - 3.887: 79.4628% ( 542) 00:15:33.948 3.887 - 3.911: 83.1061% ( 449) 00:15:33.948 3.911 - 3.935: 85.8488% ( 338) 00:15:33.948 3.935 - 3.959: 87.7231% ( 231) 00:15:33.948 3.959 - 3.982: 89.2811% ( 192) 00:15:33.948 3.982 - 4.006: 90.9932% ( 211) 00:15:33.948 4.006 - 4.030: 92.0724% ( 133) 00:15:33.948 4.030 - 4.053: 93.0542% ( 121) 00:15:33.948 4.053 - 4.077: 93.9630% ( 112) 00:15:33.948 4.077 - 4.101: 94.8069% ( 104) 00:15:33.948 4.101 - 4.124: 95.2856% ( 59) 00:15:33.948 4.124 - 4.148: 95.6589% ( 46) 00:15:33.948 4.148 - 4.172: 95.9591% ( 37) 00:15:33.948 4.172 - 4.196: 96.1944% ( 29) 00:15:33.948 4.196 - 4.219: 96.3810% ( 23) 00:15:33.948 4.219 - 4.243: 96.5677% ( 23) 00:15:33.948 4.243 - 4.267: 96.7218% ( 19) 00:15:33.948 4.267 - 4.290: 96.8111% ( 11) 00:15:33.948 4.290 - 4.314: 96.9815% ( 21) 00:15:33.948 4.314 - 4.338: 97.0302% ( 6) 00:15:33.948 4.338 - 4.361: 97.0951% ( 8) 00:15:33.948 4.361 - 4.385: 97.1600% ( 8) 00:15:33.948 4.385 - 4.409: 97.2249% ( 8) 00:15:33.948 4.409 - 4.433: 97.2817% ( 7) 00:15:33.948 4.433 - 4.456: 97.3142% ( 4) 00:15:33.948 4.480 - 4.504: 97.3385% ( 3) 00:15:33.948 4.504 - 4.527: 97.3548% ( 2) 00:15:33.948 4.575 - 4.599: 97.3629% ( 1) 00:15:33.948 4.599 - 4.622: 97.3710% ( 1) 00:15:33.948 4.622 - 4.646: 97.3872% ( 2) 00:15:33.948 4.646 - 4.670: 97.3953% ( 1) 00:15:33.948 4.717 - 4.741: 97.4034% ( 1) 00:15:33.948 4.741 - 4.764: 97.4116% ( 1) 00:15:33.948 4.764 - 4.788: 97.4278% ( 2) 00:15:33.948 4.788 - 4.812: 97.4602% ( 4) 00:15:33.948 4.812 - 4.836: 97.4684% ( 1) 00:15:33.948 4.836 - 4.859: 97.4765% ( 1) 00:15:33.948 4.859 - 4.883: 97.5333% ( 7) 00:15:33.948 4.883 - 4.907: 97.5901% ( 7) 00:15:33.948 4.907 - 4.930: 97.6712% ( 10) 00:15:33.948 4.930 - 4.954: 97.7361% ( 8) 00:15:33.948 4.954 - 
4.978: 97.7848% ( 6) 00:15:33.948 4.978 - 5.001: 97.8254% ( 5) 00:15:33.948 5.001 - 5.025: 97.8903% ( 8) 00:15:33.948 5.025 - 5.049: 97.9633% ( 9) 00:15:33.948 5.049 - 5.073: 97.9877% ( 3) 00:15:33.948 5.073 - 5.096: 97.9958% ( 1) 00:15:33.948 5.096 - 5.120: 98.0282% ( 4) 00:15:33.948 5.120 - 5.144: 98.0607% ( 4) 00:15:33.948 5.144 - 5.167: 98.1094% ( 6) 00:15:33.948 5.167 - 5.191: 98.1824% ( 9) 00:15:33.948 5.191 - 5.215: 98.2068% ( 3) 00:15:33.948 5.215 - 5.239: 98.2473% ( 5) 00:15:33.948 5.239 - 5.262: 98.2554% ( 1) 00:15:33.948 5.262 - 5.286: 98.2717% ( 2) 00:15:33.948 5.333 - 5.357: 98.2960% ( 3) 00:15:33.948 5.381 - 5.404: 98.3041% ( 1) 00:15:33.948 5.404 - 5.428: 98.3122% ( 1) 00:15:33.948 5.523 - 5.547: 98.3204% ( 1) 00:15:33.948 5.547 - 5.570: 98.3285% ( 1) 00:15:33.948 5.665 - 5.689: 98.3366% ( 1) 00:15:33.948 5.689 - 5.713: 98.3447% ( 1) 00:15:33.948 5.760 - 5.784: 98.3528% ( 1) 00:15:33.948 5.879 - 5.902: 98.3690% ( 2) 00:15:33.948 6.447 - 6.495: 98.3772% ( 1) 00:15:33.948 6.590 - 6.637: 98.3853% ( 1) 00:15:33.948 6.874 - 6.921: 98.3934% ( 1) 00:15:33.948 7.206 - 7.253: 98.4096% ( 2) 00:15:33.948 7.253 - 7.301: 98.4258% ( 2) 00:15:33.948 7.301 - 7.348: 98.4340% ( 1) 00:15:33.948 7.348 - 7.396: 98.4421% ( 1) 00:15:33.948 7.396 - 7.443: 98.4502% ( 1) 00:15:33.948 7.680 - 7.727: 98.4583% ( 1) 00:15:33.948 7.727 - 7.775: 98.4664% ( 1) 00:15:33.948 7.822 - 7.870: 98.4745% ( 1) 00:15:33.948 7.870 - 7.917: 98.4826% ( 1) 00:15:33.948 7.964 - 8.012: 98.4907% ( 1) 00:15:33.948 8.012 - 8.059: 98.5070% ( 2) 00:15:33.948 8.059 - 8.107: 98.5394% ( 4) 00:15:33.948 8.201 - 8.249: 98.5638% ( 3) 00:15:33.948 8.344 - 8.391: 98.5881% ( 3) 00:15:33.948 8.391 - 8.439: 98.6043% ( 2) 00:15:33.948 8.439 - 8.486: 98.6125% ( 1) 00:15:33.948 8.533 - 8.581: 98.6206% ( 1) 00:15:33.948 8.628 - 8.676: 98.6368% ( 2) 00:15:33.948 8.913 - 8.960: 98.6449% ( 1) 00:15:33.948 9.197 - 9.244: 98.6530% ( 1) 00:15:33.948 9.244 - 9.292: 98.6611% ( 1) 00:15:33.948 9.339 - 9.387: 98.6774% ( 2) 00:15:33.948 9.434 - 9.481: 98.7017% ( 3) 00:15:33.948 9.529 - 9.576: 98.7098% ( 1) 00:15:33.948 9.719 - 9.766: 98.7261% ( 2) 00:15:33.948 9.861 - 9.908: 98.7342% ( 1) 00:15:33.948 9.908 - 9.956: 98.7423% ( 1) 00:15:33.948 10.050 - 10.098: 98.7504% ( 1) 00:15:33.948 10.145 - 10.193: 98.7585% ( 1) 00:15:33.948 10.240 - 10.287: 98.7747% ( 2) 00:15:33.948 10.287 - 10.335: 98.7829% ( 1) 00:15:33.948 10.382 - 10.430: 98.7910% ( 1) 00:15:33.948 10.619 - 10.667: 98.7991% ( 1) 00:15:33.948 10.714 - 10.761: 98.8072% ( 1) 00:15:33.948 10.761 - 10.809: 98.8153% ( 1) 00:15:33.948 11.046 - 11.093: 98.8315% ( 2) 00:15:33.948 11.188 - 11.236: 98.8478% ( 2) 00:15:33.948 11.283 - 11.330: 98.8559% ( 1) 00:15:33.948 11.378 - 11.425: 98.8640% ( 1) 00:15:33.948 11.520 - 11.567: 98.8721% ( 1) 00:15:33.948 11.757 - 11.804: 98.8802% ( 1) 00:15:33.948 11.804 - 11.852: 98.8883% ( 1) 00:15:33.948 11.899 - 11.947: 98.8965% ( 1) 00:15:33.948 11.994 - 12.041: 98.9046% ( 1) 00:15:33.948 12.041 - 12.089: 98.9127% ( 1) 00:15:33.948 12.136 - 12.231: 98.9208% ( 1) 00:15:33.948 12.231 - 12.326: 98.9289% ( 1) 00:15:33.948 12.326 - 12.421: 98.9370% ( 1) 00:15:33.948 12.421 - 12.516: 98.9451% ( 1) 00:15:33.948 12.705 - 12.800: 98.9533% ( 1) 00:15:33.948 12.800 - 12.895: 98.9614% ( 1) 00:15:33.948 13.274 - 13.369: 98.9776% ( 2) 00:15:33.948 13.559 - 13.653: 98.9857% ( 1) 00:15:33.948 13.843 - 13.938: 98.9938% ( 1) 00:15:33.948 14.033 - 14.127: 99.0019% ( 1) 00:15:33.948 14.222 - 14.317: 99.0182% ( 2) 00:15:33.948 14.317 - 14.412: 99.0263% ( 1) 00:15:33.948 14.696 - 14.791: 
99.0344% ( 1) 00:15:33.948 14.791 - 14.886: 99.0425% ( 1) 00:15:33.948 17.067 - 17.161: 99.0506% ( 1) 00:15:33.948 17.161 - 17.256: 99.0587% ( 1) 00:15:33.948 17.256 - 17.351: 99.0669% ( 1) 00:15:33.948 17.446 - 17.541: 99.0750% ( 1) 00:15:33.948 17.541 - 17.636: 99.0993% ( 3) 00:15:33.948 17.636 - 17.730: 99.1074% ( 1) 00:15:33.948 17.730 - 17.825: 99.1480% ( 5) 00:15:33.948 17.825 - 17.920: 99.2291% ( 10) 00:15:33.948 17.920 - 18.015: 99.2859% ( 7) 00:15:33.948 18.015 - 18.110: 99.3184% ( 4) 00:15:33.948 18.110 - 18.204: 99.3914% ( 9) 00:15:33.948 18.204 - 18.299: 99.5050% ( 14) 00:15:33.948 18.299 - 18.394: 99.5456% ( 5) 00:15:33.949 18.394 - 18.489: 99.6105% ( 8) 00:15:33.949 18.489 - 18.584: 99.6592% ( 6) 00:15:33.949 18.584 - 18.679: 99.6998% ( 5) 00:15:33.949 18.679 - 18.773: 99.7403% ( 5) 00:15:33.949 18.773 - 18.868: 99.7728% ( 4) 00:15:33.949 18.963 - 19.058: 99.7971% ( 3) 00:15:33.949 19.058 - 19.153: 99.8134% ( 2) 00:15:33.949 19.153 - 19.247: 99.8215% ( 1) 00:15:33.949 19.247 - 19.342: 99.8296% ( 1) 00:15:33.949 19.816 - 19.911: 99.8377% ( 1) 00:15:33.949 19.911 - 20.006: 99.8458% ( 1) 00:15:33.949 20.290 - 20.385: 99.8539% ( 1) 00:15:33.949 20.385 - 20.480: 99.8621% ( 1) 00:15:33.949 20.764 - 20.859: 99.8702% ( 1) 00:15:33.949 20.954 - 21.049: 99.8783% ( 1) 00:15:33.949 21.049 - 21.144: 99.8864% ( 1) 00:15:33.949 22.281 - 22.376: 99.8945% ( 1) 00:15:33.949 22.661 - 22.756: 99.9026% ( 1) 00:15:33.949 23.324 - 23.419: 99.9107% ( 1) 00:15:33.949 23.988 - 24.083: 99.9189% ( 1) 00:15:33.949 24.273 - 24.462: 99.9270% ( 1) 00:15:33.949 24.841 - 25.031: 99.9351% ( 1) 00:15:33.949 3835.070 - 3859.342: 99.9432% ( 1) 00:15:33.949 3980.705 - 4004.978: 99.9919% ( 6) 00:15:33.949 4004.978 - 4029.250: 100.0000% ( 1) 00:15:33.949 00:15:33.949 Complete histogram 00:15:33.949 ================== 00:15:33.949 Range in us Cumulative Count 00:15:33.949 2.098 - 2.110: 0.4706% ( 58) 00:15:33.949 2.110 - 2.121: 7.1730% ( 826) 00:15:33.949 2.121 - 2.133: 15.8715% ( 1072) 00:15:33.949 2.133 - 2.145: 35.2970% ( 2394) 00:15:33.949 2.145 - 2.157: 42.5511% ( 894) 00:15:33.949 2.157 - 2.169: 53.8299% ( 1390) 00:15:33.949 2.169 - 2.181: 62.3255% ( 1047) 00:15:33.949 2.181 - 2.193: 64.9789% ( 327) 00:15:33.949 2.193 - 2.204: 68.6465% ( 452) 00:15:33.949 2.204 - 2.216: 74.3103% ( 698) 00:15:33.949 2.216 - 2.228: 76.5011% ( 270) 00:15:33.949 2.228 - 2.240: 80.4447% ( 486) 00:15:33.949 2.240 - 2.252: 83.2928% ( 351) 00:15:33.949 2.252 - 2.264: 84.4774% ( 146) 00:15:33.949 2.264 - 2.276: 86.1327% ( 204) 00:15:33.949 2.276 - 2.287: 89.5326% ( 419) 00:15:33.949 2.287 - 2.299: 91.3827% ( 228) 00:15:33.949 2.299 - 2.311: 92.5998% ( 150) 00:15:33.949 2.311 - 2.323: 93.8088% ( 149) 00:15:33.949 2.323 - 2.335: 94.1172% ( 38) 00:15:33.949 2.335 - 2.347: 94.3768% ( 32) 00:15:33.949 2.347 - 2.359: 94.8069% ( 53) 00:15:33.949 2.359 - 2.370: 95.2045% ( 49) 00:15:33.949 2.370 - 2.382: 95.3424% ( 17) 00:15:33.949 2.382 - 2.394: 95.3911% ( 6) 00:15:33.949 2.394 - 2.406: 95.4641% ( 9) 00:15:33.949 2.406 - 2.418: 95.5453% ( 10) 00:15:33.949 2.418 - 2.430: 95.7481% ( 25) 00:15:33.949 2.430 - 2.441: 96.1620% ( 51) 00:15:33.949 2.441 - 2.453: 96.4622% ( 37) 00:15:33.949 2.453 - 2.465: 96.8030% ( 42) 00:15:33.949 2.465 - 2.477: 97.0140% ( 26) 00:15:33.949 2.477 - 2.489: 97.2655% ( 31) 00:15:33.949 2.489 - 2.501: 97.4846% ( 27) 00:15:33.949 2.501 - 2.513: 97.6469% ( 20) 00:15:33.949 2.513 - 2.524: 97.7848% ( 17) 00:15:33.949 2.524 - 2.536: 97.8903% ( 13) 00:15:33.949 2.536 - 2.548: 97.9877% ( 12) 00:15:33.949 2.548 - 2.560: 98.0932% ( 
13) 00:15:33.949 2.560 - 2.572: 98.1256% ( 4) 00:15:33.949 2.572 - 2.584: 98.1581% ( 4) 00:15:33.949 2.584 - 2.596: 98.1905% ( 4) 00:15:33.949 2.619 - 2.631: 98.2149% ( 3) 00:15:33.949 2.631 - 2.643: 98.2230% ( 1) 00:15:33.949 2.643 - 2.655: 98.2311% ( 1) 00:15:33.949 2.655 - 2.667: 98.2392% ( 1) 00:15:33.949 2.690 - 2.702: 98.2554% ( 2) 00:15:33.949 2.750 - 2.761: 98.2636% ( 1) 00:15:33.949 2.761 - 2.773: 98.2798% ( 2) 00:15:33.949 2.844 - 2.856: 98.2879% ( 1) 00:15:33.949 2.856 - 2.868: 98.2960% ( 1) 00:15:33.949 2.892 - 2.904: 98.3041% ( 1) 00:15:33.949 3.390 - 3.413: 98.3122% ( 1) 00:15:33.949 3.437 - 3.461: 98.3204% ( 1) 00:15:33.949 3.461 - 3.484: 98.3285% ( 1) 00:15:33.949 3.484 - 3.508: 98.3366% ( 1) 00:15:33.949 3.508 - 3.532: 98.3609% ( 3) 00:15:33.949 3.532 - 3.556: 98.3690% ( 1) 00:15:33.949 3.556 - 3.579: 98.3772% ( 1) 00:15:33.949 3.579 - 3.603: 98.4015% ( 3) 00:15:33.949 3.603 - 3.627: 98.4177% ( 2) 00:15:33.949 3.627 - 3.650: 98.4258% ( 1) 00:15:33.949 3.674 - 3.698: 98.4340% ( 1) 00:15:33.949 3.698 - 3.721: 98.4502% ( 2) 00:15:33.949 3.721 - 3.745: 98.4583% ( 1) 00:15:33.949 3.769 - 3.793: 98.4664% ( 1) 00:15:33.949 3.816 - 3.840: 98.4745% ( 1) 00:15:33.949 3.840 - 3.864: 98.4826% ( 1) 00:15:33.949 3.864 - 3.887: 98.5070% ( 3) 00:15:33.949 3.887 - 3.911: 98.5232% ( 2) 00:15:33.949 3.935 - 3.959: 98.5313% ( 1) 00:15:33.949 [2024-12-08 06:19:23.664544] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.949 3.959 - 3.982: 98.5394% ( 1) 00:15:33.949 4.030 - 4.053: 98.5475% ( 1) 00:15:33.949 4.053 - 4.077: 98.5557% ( 1) 00:15:33.949 4.101 - 4.124: 98.5638% ( 1) 00:15:33.949 4.148 - 4.172: 98.5800% ( 2) 00:15:33.949 4.361 - 4.385: 98.5962% ( 2) 00:15:33.949 5.760 - 5.784: 98.6125% ( 2) 00:15:33.949 6.116 - 6.163: 98.6206% ( 1) 00:15:33.949 6.163 - 6.210: 98.6287% ( 1) 00:15:33.949 6.210 - 6.258: 98.6368% ( 1) 00:15:33.949 6.353 - 6.400: 98.6449% ( 1) 00:15:33.949 6.400 - 6.447: 98.6530% ( 1) 00:15:33.949 6.542 - 6.590: 98.6611% ( 1) 00:15:33.949 6.637 - 6.684: 98.6774% ( 2) 00:15:33.949 6.779 - 6.827: 98.6855% ( 1) 00:15:33.949 6.827 - 6.874: 98.6936% ( 1) 00:15:33.949 6.874 - 6.921: 98.7017% ( 1) 00:15:33.949 7.016 - 7.064: 98.7098% ( 1) 00:15:33.949 7.206 - 7.253: 98.7179% ( 1) 00:15:33.949 7.490 - 7.538: 98.7261% ( 1) 00:15:33.949 7.538 - 7.585: 98.7423% ( 2) 00:15:33.949 7.680 - 7.727: 98.7504% ( 1) 00:15:33.949 7.727 - 7.775: 98.7585% ( 1) 00:15:33.949 7.775 - 7.822: 98.7666% ( 1) 00:15:33.949 8.154 - 8.201: 98.7747% ( 1) 00:15:33.949 8.486 - 8.533: 98.7829% ( 1) 00:15:33.949 8.818 - 8.865: 98.7910% ( 1) 00:15:33.949 10.430 - 10.477: 98.7991% ( 1) 00:15:33.949 15.550 - 15.644: 98.8072% ( 1) 00:15:33.949 15.739 - 15.834: 98.8234% ( 2) 00:15:33.949 15.834 - 15.929: 98.8640% ( 5) 00:15:33.949 15.929 - 16.024: 98.8802% ( 2) 00:15:33.949 16.024 - 16.119: 98.9046% ( 3) 00:15:33.949 16.119 - 16.213: 98.9208% ( 2) 00:15:33.949 16.213 - 16.308: 98.9289% ( 1) 00:15:33.949 16.308 - 16.403: 98.9857% ( 7) 00:15:33.949 16.403 - 16.498: 99.0263% ( 5) 00:15:33.949 16.498 - 16.593: 99.0912% ( 8) 00:15:33.949 16.593 - 16.687: 99.1399% ( 6) 00:15:33.949 16.687 - 16.782: 99.1723% ( 4) 00:15:33.949 16.782 - 16.877: 99.1805% ( 1) 00:15:33.949 16.877 - 16.972: 99.2129% ( 4) 00:15:33.949 16.972 - 17.067: 99.2535% ( 5) 00:15:33.949 17.067 - 17.161: 99.2616% ( 1) 00:15:33.949 17.161 - 17.256: 99.2778% ( 2) 00:15:33.949 17.256 - 17.351: 99.2859% ( 1) 00:15:33.949 17.730 - 17.825: 99.2941% ( 1) 00:15:33.949 17.825 - 17.920: 99.3022% ( 1) 
00:15:33.949 17.920 - 18.015: 99.3103% ( 1) 00:15:33.949 18.110 - 18.204: 99.3427% ( 4) 00:15:33.949 18.299 - 18.394: 99.3509% ( 1) 00:15:33.949 18.489 - 18.584: 99.3590% ( 1) 00:15:33.949 18.773 - 18.868: 99.3671% ( 1) 00:15:33.949 18.868 - 18.963: 99.3752% ( 1) 00:15:33.949 19.153 - 19.247: 99.3833% ( 1) 00:15:33.949 21.144 - 21.239: 99.3914% ( 1) 00:15:33.949 22.376 - 22.471: 99.3995% ( 1) 00:15:33.949 26.927 - 27.117: 99.4077% ( 1) 00:15:33.949 3980.705 - 4004.978: 99.8053% ( 49) 00:15:33.949 4004.978 - 4029.250: 100.0000% ( 24) 00:15:33.949 00:15:33.949 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:33.949 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:33.949 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:33.949 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:33.949 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.949 [ 00:15:33.949 { 00:15:33.949 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.949 "subtype": "Discovery", 00:15:33.949 "listen_addresses": [], 00:15:33.949 "allow_any_host": true, 00:15:33.949 "hosts": [] 00:15:33.949 }, 00:15:33.949 { 00:15:33.949 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:33.949 "subtype": "NVMe", 00:15:33.949 "listen_addresses": [ 00:15:33.949 { 00:15:33.949 "trtype": "VFIOUSER", 00:15:33.949 "adrfam": "IPv4", 00:15:33.949 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:33.949 "trsvcid": "0" 00:15:33.949 } 00:15:33.950 ], 00:15:33.950 "allow_any_host": true, 00:15:33.950 "hosts": [], 00:15:33.950 "serial_number": "SPDK1", 00:15:33.950 "model_number": "SPDK bdev Controller", 00:15:33.950 "max_namespaces": 32, 00:15:33.950 "min_cntlid": 1, 00:15:33.950 "max_cntlid": 65519, 00:15:33.950 "namespaces": [ 00:15:33.950 { 00:15:33.950 "nsid": 1, 00:15:33.950 "bdev_name": "Malloc1", 00:15:33.950 "name": "Malloc1", 00:15:33.950 "nguid": "7E7F0F5EA2D747B2BCCB000DBBC7CED0", 00:15:33.950 "uuid": "7e7f0f5e-a2d7-47b2-bccb-000dbbc7ced0" 00:15:33.950 }, 00:15:33.950 { 00:15:33.950 "nsid": 2, 00:15:33.950 "bdev_name": "Malloc3", 00:15:33.950 "name": "Malloc3", 00:15:33.950 "nguid": "846EBEE8BF8E4D06B805BE0DB18BDA62", 00:15:33.950 "uuid": "846ebee8-bf8e-4d06-b805-be0db18bda62" 00:15:33.950 } 00:15:33.950 ] 00:15:33.950 }, 00:15:33.950 { 00:15:33.950 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:33.950 "subtype": "NVMe", 00:15:33.950 "listen_addresses": [ 00:15:33.950 { 00:15:33.950 "trtype": "VFIOUSER", 00:15:33.950 "adrfam": "IPv4", 00:15:33.950 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:33.950 "trsvcid": "0" 00:15:33.950 } 00:15:33.950 ], 00:15:33.950 "allow_any_host": true, 00:15:33.950 "hosts": [], 00:15:33.950 "serial_number": "SPDK2", 00:15:33.950 "model_number": "SPDK bdev Controller", 00:15:33.950 "max_namespaces": 32, 00:15:33.950 "min_cntlid": 1, 00:15:33.950 "max_cntlid": 65519, 00:15:33.950 "namespaces": [ 00:15:33.950 { 00:15:33.950 "nsid": 1, 00:15:33.950 "bdev_name": "Malloc2", 00:15:33.950 "name": "Malloc2", 00:15:33.950 "nguid": "76C3E51DE1D94D3EB4A4FD44522EEC9A", 00:15:33.950 "uuid": 
"76c3e51d-e1d9-4d3e-b4a4-fd44522eec9a" 00:15:33.950 } 00:15:33.950 ] 00:15:33.950 } 00:15:33.950 ] 00:15:33.950 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:33.950 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1045118 00:15:33.950 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:33.950 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:33.950 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:33.950 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:33.950 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:33.950 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:33.950 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:33.950 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:34.240 [2024-12-08 06:19:24.145478] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.240 Malloc4 00:15:34.240 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:34.514 [2024-12-08 06:19:24.555563] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.514 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:34.514 Asynchronous Event Request test 00:15:34.514 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.514 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.514 Registering asynchronous event callbacks... 00:15:34.514 Starting namespace attribute notice tests for all controllers... 00:15:34.514 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:34.514 aer_cb - Changed Namespace 00:15:34.514 Cleaning up... 
00:15:34.773 [ 00:15:34.773 { 00:15:34.773 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:34.773 "subtype": "Discovery", 00:15:34.773 "listen_addresses": [], 00:15:34.773 "allow_any_host": true, 00:15:34.773 "hosts": [] 00:15:34.773 }, 00:15:34.773 { 00:15:34.773 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:34.773 "subtype": "NVMe", 00:15:34.773 "listen_addresses": [ 00:15:34.773 { 00:15:34.773 "trtype": "VFIOUSER", 00:15:34.773 "adrfam": "IPv4", 00:15:34.773 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:34.773 "trsvcid": "0" 00:15:34.773 } 00:15:34.773 ], 00:15:34.773 "allow_any_host": true, 00:15:34.773 "hosts": [], 00:15:34.773 "serial_number": "SPDK1", 00:15:34.773 "model_number": "SPDK bdev Controller", 00:15:34.773 "max_namespaces": 32, 00:15:34.773 "min_cntlid": 1, 00:15:34.773 "max_cntlid": 65519, 00:15:34.773 "namespaces": [ 00:15:34.773 { 00:15:34.773 "nsid": 1, 00:15:34.773 "bdev_name": "Malloc1", 00:15:34.773 "name": "Malloc1", 00:15:34.773 "nguid": "7E7F0F5EA2D747B2BCCB000DBBC7CED0", 00:15:34.773 "uuid": "7e7f0f5e-a2d7-47b2-bccb-000dbbc7ced0" 00:15:34.773 }, 00:15:34.773 { 00:15:34.773 "nsid": 2, 00:15:34.773 "bdev_name": "Malloc3", 00:15:34.773 "name": "Malloc3", 00:15:34.773 "nguid": "846EBEE8BF8E4D06B805BE0DB18BDA62", 00:15:34.773 "uuid": "846ebee8-bf8e-4d06-b805-be0db18bda62" 00:15:34.773 } 00:15:34.773 ] 00:15:34.773 }, 00:15:34.773 { 00:15:34.773 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:34.773 "subtype": "NVMe", 00:15:34.773 "listen_addresses": [ 00:15:34.773 { 00:15:34.773 "trtype": "VFIOUSER", 00:15:34.773 "adrfam": "IPv4", 00:15:34.773 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:34.773 "trsvcid": "0" 00:15:34.773 } 00:15:34.773 ], 00:15:34.773 "allow_any_host": true, 00:15:34.773 "hosts": [], 00:15:34.773 "serial_number": "SPDK2", 00:15:34.773 "model_number": "SPDK bdev Controller", 00:15:34.773 "max_namespaces": 32, 00:15:34.773 "min_cntlid": 1, 00:15:34.773 "max_cntlid": 65519, 00:15:34.773 "namespaces": [ 00:15:34.773 { 00:15:34.773 "nsid": 1, 00:15:34.773 "bdev_name": "Malloc2", 00:15:34.773 "name": "Malloc2", 00:15:34.773 "nguid": "76C3E51DE1D94D3EB4A4FD44522EEC9A", 00:15:34.773 "uuid": "76c3e51d-e1d9-4d3e-b4a4-fd44522eec9a" 00:15:34.773 }, 00:15:34.773 { 00:15:34.773 "nsid": 2, 00:15:34.773 "bdev_name": "Malloc4", 00:15:34.773 "name": "Malloc4", 00:15:34.773 "nguid": "E80EC89234664A2D9336D9DAE82D7CCB", 00:15:34.773 "uuid": "e80ec892-3466-4a2d-9336-d9dae82d7ccb" 00:15:34.773 } 00:15:34.773 ] 00:15:34.773 } 00:15:34.773 ] 00:15:34.773 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1045118 00:15:34.773 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:34.773 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1039515 00:15:34.773 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1039515 ']' 00:15:34.773 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1039515 00:15:34.773 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:34.773 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.773 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1039515 00:15:34.773 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:34.773 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:34.773 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1039515' 00:15:34.773 killing process with pid 1039515 00:15:34.773 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1039515 00:15:34.773 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1039515 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1045264 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1045264' 00:15:35.339 Process pid: 1045264 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1045264 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1045264 ']' 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.339 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:35.339 [2024-12-08 06:19:25.221815] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:35.339 [2024-12-08 06:19:25.222858] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:15:35.339 [2024-12-08 06:19:25.222929] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.339 [2024-12-08 06:19:25.287491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.339 [2024-12-08 06:19:25.342162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.339 [2024-12-08 06:19:25.342229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.339 [2024-12-08 06:19:25.342268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.339 [2024-12-08 06:19:25.342279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.339 [2024-12-08 06:19:25.342289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.339 [2024-12-08 06:19:25.343881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.339 [2024-12-08 06:19:25.343937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.339 [2024-12-08 06:19:25.344004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.340 [2024-12-08 06:19:25.344008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.340 [2024-12-08 06:19:25.430695] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:35.340 [2024-12-08 06:19:25.430943] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:35.340 [2024-12-08 06:19:25.431198] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:35.340 [2024-12-08 06:19:25.431766] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:35.340 [2024-12-08 06:19:25.431990] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
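For reference, the xtrace that follows rebuilds the same two-subsystem vfio-user topology as before, this time against the interrupt-mode target ('-M -I' transport args). A condensed sketch of the RPC sequence in the trace below, not a verbatim excerpt (paths, bdev names, and NQNs are exactly the ones the trace uses; rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py):

    # Stand up the VFIOUSER transport, then one subsystem per device directory.
    rpc.py nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
        rpc.py bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MB malloc bdev, 512-byte blocks
        rpc.py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc.py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        rpc.py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
            -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done

The subsystem JSON dumped above shows the state a setup like this produces once a second namespace (Malloc3/Malloc4) has been added to each controller.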
00:15:35.340 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.340 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:35.340 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:36.719 06:19:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:36.719 06:19:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:36.719 06:19:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:36.719 06:19:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:36.719 06:19:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:36.719 06:19:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:36.978 Malloc1 00:15:36.978 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:37.547 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:37.806 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:38.064 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.064 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:38.064 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:38.321 Malloc2 00:15:38.321 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:38.580 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:38.838 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:39.097 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:39.097 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1045264 00:15:39.097 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1045264 ']' 00:15:39.097 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1045264 00:15:39.097 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:39.097 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.097 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1045264 00:15:39.097 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:39.097 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:39.097 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1045264' 00:15:39.097 killing process with pid 1045264 00:15:39.097 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1045264 00:15:39.097 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1045264 00:15:39.355 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:39.355 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:39.355 00:15:39.355 real 0m53.556s 00:15:39.355 user 3m27.020s 00:15:39.355 sys 0m3.945s 00:15:39.355 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.355 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:39.355 ************************************ 00:15:39.355 END TEST nvmf_vfio_user 00:15:39.355 ************************************ 00:15:39.355 06:19:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:39.355 06:19:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:39.355 06:19:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.355 06:19:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:39.355 ************************************ 00:15:39.355 START TEST nvmf_vfio_user_nvme_compliance 00:15:39.355 ************************************ 00:15:39.355 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:39.355 * Looking for test storage... 
00:15:39.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:39.355 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:39.355 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:39.355 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:39.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.613 --rc genhtml_branch_coverage=1 00:15:39.613 --rc genhtml_function_coverage=1 00:15:39.613 --rc genhtml_legend=1 00:15:39.613 --rc geninfo_all_blocks=1 00:15:39.613 --rc geninfo_unexecuted_blocks=1 00:15:39.613 00:15:39.613 ' 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:39.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.613 --rc genhtml_branch_coverage=1 00:15:39.613 --rc genhtml_function_coverage=1 00:15:39.613 --rc genhtml_legend=1 00:15:39.613 --rc geninfo_all_blocks=1 00:15:39.613 --rc geninfo_unexecuted_blocks=1 00:15:39.613 00:15:39.613 ' 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:39.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.613 --rc genhtml_branch_coverage=1 00:15:39.613 --rc genhtml_function_coverage=1 00:15:39.613 --rc genhtml_legend=1 00:15:39.613 --rc geninfo_all_blocks=1 00:15:39.613 --rc geninfo_unexecuted_blocks=1 00:15:39.613 00:15:39.613 ' 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:39.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.613 --rc genhtml_branch_coverage=1 00:15:39.613 --rc genhtml_function_coverage=1 00:15:39.613 --rc genhtml_legend=1 00:15:39.613 --rc geninfo_all_blocks=1 00:15:39.613 --rc 
geninfo_unexecuted_blocks=1 00:15:39.613 00:15:39.613 ' 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.613 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:39.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1045879 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1045879' 00:15:39.614 Process pid: 1045879 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1045879 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1045879 ']' 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.614 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:39.614 [2024-12-08 06:19:29.628316] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:15:39.614 [2024-12-08 06:19:29.628408] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.614 [2024-12-08 06:19:29.695238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:39.872 [2024-12-08 06:19:29.749669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.872 [2024-12-08 06:19:29.749734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.872 [2024-12-08 06:19:29.749767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.872 [2024-12-08 06:19:29.749778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.872 [2024-12-08 06:19:29.749787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.872 [2024-12-08 06:19:29.751171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.872 [2024-12-08 06:19:29.751235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.872 [2024-12-08 06:19:29.751238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.872 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.872 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:39.872 06:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.810 malloc0 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:40.810 06:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.810 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:41.087 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.087 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.087 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.087 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:41.087 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.087 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.087 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.087 06:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:41.087 00:15:41.087 00:15:41.087 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.087 http://cunit.sourceforge.net/ 00:15:41.087 00:15:41.087 00:15:41.087 Suite: nvme_compliance 00:15:41.087 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-08 06:19:31.125204] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.087 [2024-12-08 06:19:31.126735] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:41.087 [2024-12-08 06:19:31.126770] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:41.087 [2024-12-08 06:19:31.126784] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:41.087 [2024-12-08 06:19:31.128231] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.087 passed 00:15:41.347 Test: admin_identify_ctrlr_verify_fused ...[2024-12-08 06:19:31.212867] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.347 [2024-12-08 06:19:31.215878] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.347 passed 00:15:41.347 Test: admin_identify_ns ...[2024-12-08 06:19:31.303187] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.347 [2024-12-08 06:19:31.362738] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:41.347 [2024-12-08 06:19:31.370737] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:41.347 [2024-12-08 06:19:31.391855] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:41.347 passed 00:15:41.607 Test: admin_get_features_mandatory_features ...[2024-12-08 06:19:31.474348] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.607 [2024-12-08 06:19:31.477375] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.607 passed 00:15:41.607 Test: admin_get_features_optional_features ...[2024-12-08 06:19:31.562957] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.607 [2024-12-08 06:19:31.565979] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.607 passed 00:15:41.607 Test: admin_set_features_number_of_queues ...[2024-12-08 06:19:31.648214] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.866 [2024-12-08 06:19:31.752841] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.866 passed 00:15:41.866 Test: admin_get_log_page_mandatory_logs ...[2024-12-08 06:19:31.836335] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.866 [2024-12-08 06:19:31.839359] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.866 passed 00:15:41.866 Test: admin_get_log_page_with_lpo ...[2024-12-08 06:19:31.920506] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.125 [2024-12-08 06:19:31.987742] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:42.125 [2024-12-08 06:19:32.000824] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.125 passed 00:15:42.125 Test: fabric_property_get ...[2024-12-08 06:19:32.084901] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.125 [2024-12-08 06:19:32.086197] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:42.125 [2024-12-08 06:19:32.087926] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.125 passed 00:15:42.125 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-08 06:19:32.172467] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.125 [2024-12-08 06:19:32.173773] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:42.125 [2024-12-08 06:19:32.175489] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.125 passed 00:15:42.384 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-08 06:19:32.257416] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.384 [2024-12-08 06:19:32.340733] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:42.384 [2024-12-08 06:19:32.356731] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:42.384 [2024-12-08 06:19:32.361842] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.384 passed 00:15:42.384 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-08 06:19:32.445363] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.384 [2024-12-08 06:19:32.446654] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:42.384 [2024-12-08 06:19:32.448382] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.384 passed 00:15:42.645 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-08 06:19:32.532213] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.645 [2024-12-08 06:19:32.607735] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:42.645 [2024-12-08 06:19:32.631733] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:42.645 [2024-12-08 06:19:32.636840] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.645 passed 00:15:42.645 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-08 06:19:32.719347] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.645 [2024-12-08 06:19:32.720633] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:42.645 [2024-12-08 06:19:32.720672] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:42.645 [2024-12-08 06:19:32.722373] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.645 passed 00:15:42.905 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-08 06:19:32.805244] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.905 [2024-12-08 06:19:32.896730] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:42.905 [2024-12-08 06:19:32.904734] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:42.905 [2024-12-08 06:19:32.912735] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:42.905 [2024-12-08 06:19:32.920732] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:42.905 [2024-12-08 06:19:32.949830] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.905 passed 00:15:43.166 Test: admin_create_io_sq_verify_pc ...[2024-12-08 06:19:33.033381] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.166 [2024-12-08 06:19:33.049746] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:43.166 [2024-12-08 06:19:33.067800] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.166 passed 00:15:43.166 Test: admin_create_io_qp_max_qps ...[2024-12-08 06:19:33.148344] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.547 [2024-12-08 06:19:34.254754] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:44.547 [2024-12-08 06:19:34.638026] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.805 passed 00:15:44.805 Test: admin_create_io_sq_shared_cq ...[2024-12-08 06:19:34.719336] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.805 [2024-12-08 06:19:34.850735] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:44.805 [2024-12-08 06:19:34.887814] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.805 passed 00:15:44.805 00:15:44.805 Run Summary: Type Total Ran Passed Failed Inactive 00:15:44.805 suites 1 1 n/a 0 0 00:15:44.805 tests 18 18 18 0 0 00:15:44.805 asserts 
360 360 360 0 n/a 00:15:44.805 00:15:44.805 Elapsed time = 1.560 seconds 00:15:45.062 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1045879 00:15:45.062 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1045879 ']' 00:15:45.062 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1045879 00:15:45.062 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:45.062 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.062 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1045879 00:15:45.062 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.062 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.062 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1045879' 00:15:45.062 killing process with pid 1045879 00:15:45.062 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1045879 00:15:45.062 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1045879 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:45.320 00:15:45.320 real 0m5.790s 00:15:45.320 user 0m16.188s 00:15:45.320 sys 0m0.575s 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:45.320 ************************************ 00:15:45.320 END TEST nvmf_vfio_user_nvme_compliance 00:15:45.320 ************************************ 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:45.320 ************************************ 00:15:45.320 START TEST nvmf_vfio_user_fuzz 00:15:45.320 ************************************ 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:45.320 * Looking for test storage... 
00:15:45.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:45.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.320 --rc genhtml_branch_coverage=1 00:15:45.320 --rc genhtml_function_coverage=1 00:15:45.320 --rc genhtml_legend=1 00:15:45.320 --rc geninfo_all_blocks=1 00:15:45.320 --rc geninfo_unexecuted_blocks=1 00:15:45.320 00:15:45.320 ' 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:45.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.320 --rc genhtml_branch_coverage=1 00:15:45.320 --rc genhtml_function_coverage=1 00:15:45.320 --rc genhtml_legend=1 00:15:45.320 --rc geninfo_all_blocks=1 00:15:45.320 --rc geninfo_unexecuted_blocks=1 00:15:45.320 00:15:45.320 ' 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:45.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.320 --rc genhtml_branch_coverage=1 00:15:45.320 --rc genhtml_function_coverage=1 00:15:45.320 --rc genhtml_legend=1 00:15:45.320 --rc geninfo_all_blocks=1 00:15:45.320 --rc geninfo_unexecuted_blocks=1 00:15:45.320 00:15:45.320 ' 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:45.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.320 --rc genhtml_branch_coverage=1 00:15:45.320 --rc genhtml_function_coverage=1 00:15:45.320 --rc genhtml_legend=1 00:15:45.320 --rc geninfo_all_blocks=1 00:15:45.320 --rc geninfo_unexecuted_blocks=1 00:15:45.320 00:15:45.320 ' 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:45.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1046604 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1046604' 00:15:45.320 Process pid: 1046604 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1046604 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1046604 ']' 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
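The fuzz run keeps the target deliberately small: a single reactor (-m 0x1) on core 0, with the fuzzer itself later pinned to core 1 via -m 0x2. Once the target is listening, the trace below builds one subsystem with one malloc namespace over rpc_cmd. A sketch of that sequence, assuming rpc_cmd is autotest_common.sh's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock:

    rpc_cmd nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc_cmd bdev_malloc_create 64 512 -b malloc0
    rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0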
00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.320 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:45.886 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.886 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:45.886 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:46.823 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:46.823 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.823 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.824 malloc0 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
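At this point the vfio-user target is fully assembled: a VFIOUSER transport, a 64 MiB / 512 B-block malloc bdev, a subsystem carrying that bdev as a namespace, and a listener on /var/run/vfio-user. The same sequence can be replayed by hand against a running nvmf_tgt; a condensed sketch using scripts/rpc.py, with paths following the workspace layout shown in this log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  $rpc bdev_malloc_create 64 512 -b malloc0
  $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0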
00:15:46.824 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:18.901 Fuzzing completed. Shutting down the fuzz application 00:16:18.901 00:16:18.901 Dumping successful admin opcodes: 00:16:18.901 9, 10, 00:16:18.901 Dumping successful io opcodes: 00:16:18.901 0, 00:16:18.901 NS: 0x20000081ef00 I/O qp, Total commands completed: 662559, total successful commands: 2586, random_seed: 585655296 00:16:18.901 NS: 0x20000081ef00 admin qp, Total commands completed: 110400, total successful commands: 27, random_seed: 821634368 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1046604 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1046604 ']' 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1046604 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1046604 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1046604' 00:16:18.901 killing process with pid 1046604 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1046604 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1046604 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:18.901 00:16:18.901 real 0m32.243s 00:16:18.901 user 0m30.270s 00:16:18.901 sys 0m29.556s 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.901 ************************************ 
00:16:18.901 END TEST nvmf_vfio_user_fuzz 00:16:18.901 ************************************ 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:18.901 ************************************ 00:16:18.901 START TEST nvmf_auth_target 00:16:18.901 ************************************ 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:18.901 * Looking for test storage... 00:16:18.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:18.901 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:18.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.902 --rc genhtml_branch_coverage=1 00:16:18.902 --rc genhtml_function_coverage=1 00:16:18.902 --rc genhtml_legend=1 00:16:18.902 --rc geninfo_all_blocks=1 00:16:18.902 --rc geninfo_unexecuted_blocks=1 00:16:18.902 00:16:18.902 ' 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:18.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.902 --rc genhtml_branch_coverage=1 00:16:18.902 --rc genhtml_function_coverage=1 00:16:18.902 --rc genhtml_legend=1 00:16:18.902 --rc geninfo_all_blocks=1 00:16:18.902 --rc geninfo_unexecuted_blocks=1 00:16:18.902 00:16:18.902 ' 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:18.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.902 --rc genhtml_branch_coverage=1 00:16:18.902 --rc genhtml_function_coverage=1 00:16:18.902 --rc genhtml_legend=1 00:16:18.902 --rc geninfo_all_blocks=1 00:16:18.902 --rc geninfo_unexecuted_blocks=1 00:16:18.902 00:16:18.902 ' 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:18.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.902 --rc genhtml_branch_coverage=1 00:16:18.902 --rc genhtml_function_coverage=1 00:16:18.902 --rc genhtml_legend=1 00:16:18.902 --rc geninfo_all_blocks=1 00:16:18.902 --rc geninfo_unexecuted_blocks=1 00:16:18.902 00:16:18.902 ' 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:18.902 06:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:18.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:18.902 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.903 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:18.903 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:18.903 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:18.903 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.903 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.903 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.903 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:18.903 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:18.903 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:18.903 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.856 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.856 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:19.856 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:19.856 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:19.856 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:19.856 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:19.856 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:19.856 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:19.856 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:19.856 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:19.856 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:19.857 
06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:19.857 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:19.857 06:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:19.857 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:19.857 Found net devices under 0000:84:00.0: cvl_0_0 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:19.857 Found net devices under 0000:84:00.1: cvl_0_1 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:19.857 06:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:19.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:16:19.857 00:16:19.857 --- 10.0.0.2 ping statistics --- 00:16:19.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.857 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:16:19.857 00:16:19.857 --- 10.0.0.1 ping statistics --- 00:16:19.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.857 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:19.857 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:19.858 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:19.858 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:19.858 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.858 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1052073 00:16:19.858 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:20.117 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1052073 00:16:20.117 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1052073 ']' 00:16:20.117 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.117 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.117 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
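Both pings confirm the loopback topology the harness just built out of the two e810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), with an iptables accept rule opened for the NVMe/TCP port. Condensed from the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns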
00:16:20.117 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.117 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1052218 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d343734cd094d095a655ad9c96c8748239a61099017bed46 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.k5B 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d343734cd094d095a655ad9c96c8748239a61099017bed46 0 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d343734cd094d095a655ad9c96c8748239a61099017bed46 0 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d343734cd094d095a655ad9c96c8748239a61099017bed46 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
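The body of that python here-doc is not echoed by xtrace. A plausible reconstruction of what format_key does with the hex string produced by xxd, assuming the DH-HMAC-CHAP secret representation defined by the NVMe spec and implemented by nvme-cli (base64 of the raw secret followed by its little-endian CRC32, with the digest index as a two-digit hex field) — an illustrative sketch, not the verbatim common.sh body:

  # illustrative sketch: wrap a raw hex secret in the DHHC-1:<digest>:<base64>: form
  format_dhchap_sketch() {
    local key=$1 digest=$2
    python3 -c 'import base64, sys, zlib; k = bytes.fromhex(sys.argv[1]); crc = zlib.crc32(k).to_bytes(4, "little"); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$key" "$digest"
  }
  # e.g. the null-digest (0) key generated above:
  format_dhchap_sketch d343734cd094d095a655ad9c96c8748239a61099017bed46 0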
00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.k5B 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.k5B 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.k5B 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=64d8555d2d129fcc9cfab93be9cd4fe79d7bcd7b2301eb08d951f6e6b8a2441c 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.CGh 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 64d8555d2d129fcc9cfab93be9cd4fe79d7bcd7b2301eb08d951f6e6b8a2441c 3 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 64d8555d2d129fcc9cfab93be9cd4fe79d7bcd7b2301eb08d951f6e6b8a2441c 3 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=64d8555d2d129fcc9cfab93be9cd4fe79d7bcd7b2301eb08d951f6e6b8a2441c 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.CGh 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.CGh 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.CGh 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9da197f1c51415c1476d0a0fb4010292 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.A52 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9da197f1c51415c1476d0a0fb4010292 1 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9da197f1c51415c1476d0a0fb4010292 1 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9da197f1c51415c1476d0a0fb4010292 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.A52 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.A52 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.A52 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7204ce6f51b12032eed5cbd27bfc5e39223f527436cc7681 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.CDW 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7204ce6f51b12032eed5cbd27bfc5e39223f527436cc7681 2 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7204ce6f51b12032eed5cbd27bfc5e39223f527436cc7681 2 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.375 06:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7204ce6f51b12032eed5cbd27bfc5e39223f527436cc7681 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.CDW 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.CDW 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.CDW 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=57b610cb63b29fb6db1b1162cab28d8509ce176c5b82add9 00:16:20.375 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Xcc 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 57b610cb63b29fb6db1b1162cab28d8509ce176c5b82add9 2 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 57b610cb63b29fb6db1b1162cab28d8509ce176c5b82add9 2 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=57b610cb63b29fb6db1b1162cab28d8509ce176c5b82add9 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Xcc 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Xcc 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Xcc 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cab279686f7a49047e01217619d5ba4e 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.1Fo 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cab279686f7a49047e01217619d5ba4e 1 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cab279686f7a49047e01217619d5ba4e 1 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cab279686f7a49047e01217619d5ba4e 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.1Fo 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.1Fo 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.1Fo 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=77323045c715c9ff20e207ba8d3b44c1252eba314605d5f8d284939a6af5cd1e 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ufw 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 77323045c715c9ff20e207ba8d3b44c1252eba314605d5f8d284939a6af5cd1e 3 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 77323045c715c9ff20e207ba8d3b44c1252eba314605d5f8d284939a6af5cd1e 3 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=77323045c715c9ff20e207ba8d3b44c1252eba314605d5f8d284939a6af5cd1e 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ufw 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ufw 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Ufw 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1052073 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1052073 ']' 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.633 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.890 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.890 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:20.890 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1052218 /var/tmp/host.sock 00:16:20.890 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1052218 ']' 00:16:20.890 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:20.890 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.890 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:20.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
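From here each key file is registered twice, once in the target (rpc_cmd, default socket /var/tmp/spdk.sock) and once in the host application (hostrpc, socket /var/tmp/host.sock), so both ends of the DH-HMAC-CHAP exchange can refer to the same named key. Condensed from the trace that follows:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc keyring_file_add_key key0 /tmp/spdk.key-null.k5B                           # target side
  $rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.k5B     # host side
  $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CGh
  $rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CGh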
00:16:20.890 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.890 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.159 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.159 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:21.159 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:21.159 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.159 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.159 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.159 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:21.159 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.k5B 00:16:21.159 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.159 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.159 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.159 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.k5B 00:16:21.159 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.k5B 00:16:21.416 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.CGh ]] 00:16:21.416 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CGh 00:16:21.416 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.416 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.416 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.416 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CGh 00:16:21.416 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CGh 00:16:21.674 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:21.674 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.A52 00:16:21.674 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.674 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.674 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.674 06:20:11 
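
Both waitforlisten calls above (pid 1052073 for the target on the default /var/tmp/spdk.sock, pid 1052218 for the host service on /var/tmp/host.sock) block until the given process answers RPCs on its UNIX socket. A rough approximation of that helper, assuming rpc_get_methods as the liveness probe; the real autotest_common.sh version keeps more retry bookkeeping:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # process died while waiting
            # server is up once any RPC round-trips on the socket
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1    # never started listening
    }
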
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.A52 00:16:21.674 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.A52 00:16:21.933 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.CDW ]] 00:16:21.933 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CDW 00:16:21.933 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.933 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.933 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.933 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CDW 00:16:21.933 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CDW 00:16:22.193 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:22.193 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Xcc 00:16:22.193 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.193 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.452 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.452 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Xcc 00:16:22.452 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Xcc 00:16:22.709 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.1Fo ]] 00:16:22.709 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Fo 00:16:22.709 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.709 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.709 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.709 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Fo 00:16:22.709 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Fo 00:16:22.967 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:22.967 06:20:12 
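
Every generated key file is registered twice, once against the target RPC socket and once against the host-side one, so that the later --dhchap-key key<i> / --dhchap-ctrlr-key ckey<i> arguments can resolve the secrets by keyring name. Condensed from the loop being traced here, with rpc/hostrpc as shorthand for the two rpc.py invocations shown:

    rpc()     { scripts/rpc.py "$@"; }                        # target, /var/tmp/spdk.sock
    hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # initiator side

    for i in "${!keys[@]}"; do
        rpc keyring_file_add_key "key$i" "${keys[i]}"
        hostrpc keyring_file_add_key "key$i" "${keys[i]}"
        if [[ -n ${ckeys[i]} ]]; then                         # key3 has no ctrlr key
            rpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
            hostrpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
        fi
    done
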
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Ufw 00:16:22.967 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.967 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.967 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.967 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Ufw 00:16:22.967 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Ufw 00:16:23.225 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:23.225 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:23.225 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.225 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.225 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:23.225 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:23.483 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:23.483 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.483 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.483 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.483 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.483 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.483 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.483 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.483 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.483 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.483 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.483 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.483 
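
The connect_authenticate sha256 null 0 call expanded over the following records reduces to three moves: authorize the host NQN on the subsystem with the keypair under test, attach a controller from the host side, and assert that the qpair actually negotiated DH-HMAC-CHAP. A condensed sketch, with subnqn/hostnqn standing in for nqn.2024-03.io.spdk:cnode0 and the nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-... host NQN of this run:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # the ctrlr-key flag is dropped entirely when no ckey exists for this index
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

        rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"

        # the qpair must come up with the DH-HMAC-CHAP exchange completed
        [[ $(rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state') == completed ]]
    }
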
06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.741 00:16:23.741 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.741 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.741 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.000 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.000 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.000 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.000 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.000 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.000 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.000 { 00:16:24.000 "cntlid": 1, 00:16:24.000 "qid": 0, 00:16:24.000 "state": "enabled", 00:16:24.000 "thread": "nvmf_tgt_poll_group_000", 00:16:24.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:24.000 "listen_address": { 00:16:24.000 "trtype": "TCP", 00:16:24.000 "adrfam": "IPv4", 00:16:24.000 "traddr": "10.0.0.2", 00:16:24.000 "trsvcid": "4420" 00:16:24.000 }, 00:16:24.000 "peer_address": { 00:16:24.000 "trtype": "TCP", 00:16:24.000 "adrfam": "IPv4", 00:16:24.000 "traddr": "10.0.0.1", 00:16:24.000 "trsvcid": "48570" 00:16:24.000 }, 00:16:24.000 "auth": { 00:16:24.000 "state": "completed", 00:16:24.000 "digest": "sha256", 00:16:24.000 "dhgroup": "null" 00:16:24.000 } 00:16:24.000 } 00:16:24.000 ]' 00:16:24.000 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.000 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.000 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.000 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.000 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.258 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.258 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.258 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.515 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:16:24.516 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.454 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.713 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.713 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.713 06:20:15 
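
On top of the in-process bdev path, each iteration also drives the kernel initiator through nvme-cli: nvme_connect hands the raw DHHC-1 strings to the fabric (here the key0 secret as the host key and ckey0, the sha512 key, as the controller key), waits for the device, and the pass ends with nvme disconnect. Roughly, with the secrets elided and the flags as traced (-i 1 caps the I/O queue count, -l 0 sets ctrl-loss-tmo):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:00:...:' --dhchap-ctrl-secret 'DHHC-1:03:...:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
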
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.713 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.970 00:16:25.970 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.970 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.970 06:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.227 { 00:16:26.227 "cntlid": 3, 00:16:26.227 "qid": 0, 00:16:26.227 "state": "enabled", 00:16:26.227 "thread": "nvmf_tgt_poll_group_000", 00:16:26.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:26.227 "listen_address": { 00:16:26.227 "trtype": "TCP", 00:16:26.227 "adrfam": "IPv4", 00:16:26.227 "traddr": "10.0.0.2", 00:16:26.227 "trsvcid": "4420" 00:16:26.227 }, 00:16:26.227 "peer_address": { 00:16:26.227 "trtype": "TCP", 00:16:26.227 "adrfam": "IPv4", 00:16:26.227 "traddr": "10.0.0.1", 00:16:26.227 "trsvcid": "48596" 00:16:26.227 }, 00:16:26.227 "auth": { 00:16:26.227 "state": "completed", 00:16:26.227 "digest": "sha256", 00:16:26.227 "dhgroup": "null" 00:16:26.227 } 00:16:26.227 } 00:16:26.227 ]' 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.227 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.484 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:16:26.484 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:16:27.417 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.417 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:27.417 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.417 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.417 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.417 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.417 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.417 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.676 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:27.676 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.676 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.676 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.676 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.676 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.676 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.676 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.676 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.676 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.676 06:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.676 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.676 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.932 00:16:28.191 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.191 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.191 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.449 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.449 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.449 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.449 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.449 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.449 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.449 { 00:16:28.449 "cntlid": 5, 00:16:28.449 "qid": 0, 00:16:28.449 "state": "enabled", 00:16:28.449 "thread": "nvmf_tgt_poll_group_000", 00:16:28.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:28.449 "listen_address": { 00:16:28.449 "trtype": "TCP", 00:16:28.449 "adrfam": "IPv4", 00:16:28.449 "traddr": "10.0.0.2", 00:16:28.449 "trsvcid": "4420" 00:16:28.449 }, 00:16:28.449 "peer_address": { 00:16:28.449 "trtype": "TCP", 00:16:28.449 "adrfam": "IPv4", 00:16:28.449 "traddr": "10.0.0.1", 00:16:28.449 "trsvcid": "48630" 00:16:28.449 }, 00:16:28.449 "auth": { 00:16:28.449 "state": "completed", 00:16:28.449 "digest": "sha256", 00:16:28.449 "dhgroup": "null" 00:16:28.449 } 00:16:28.449 } 00:16:28.449 ]' 00:16:28.449 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.449 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.449 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.449 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.449 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.449 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.449 06:20:18 
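
The three [[ ... ]] assertions traced after each attach all re-parse the same nvmf_subsystem_get_qpairs document; fetching the qpair list once and probing its auth object is equivalent (values shown for the sha256/null pass):

    qpairs=$(rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]       # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished
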
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.449 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.706 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:16:28.706 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:16:29.641 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.641 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:29.641 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.641 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.641 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.641 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.641 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.641 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.899 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:29.899 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.899 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.899 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:29.899 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.899 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.899 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:29.899 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.899 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.899 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.899 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:29.899 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.899 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.185 00:16:30.185 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.185 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.185 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.471 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.471 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.471 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.471 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.471 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.471 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.471 { 00:16:30.471 "cntlid": 7, 00:16:30.471 "qid": 0, 00:16:30.471 "state": "enabled", 00:16:30.471 "thread": "nvmf_tgt_poll_group_000", 00:16:30.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:30.471 "listen_address": { 00:16:30.471 "trtype": "TCP", 00:16:30.471 "adrfam": "IPv4", 00:16:30.471 "traddr": "10.0.0.2", 00:16:30.471 "trsvcid": "4420" 00:16:30.471 }, 00:16:30.471 "peer_address": { 00:16:30.471 "trtype": "TCP", 00:16:30.471 "adrfam": "IPv4", 00:16:30.471 "traddr": "10.0.0.1", 00:16:30.471 "trsvcid": "37852" 00:16:30.471 }, 00:16:30.471 "auth": { 00:16:30.471 "state": "completed", 00:16:30.471 "digest": "sha256", 00:16:30.471 "dhgroup": "null" 00:16:30.471 } 00:16:30.471 } 00:16:30.471 ]' 00:16:30.471 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.471 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.471 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.727 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:30.727 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.727 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.727 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.727 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.982 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:16:30.982 06:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:16:31.916 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.916 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:31.916 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.916 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.916 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.916 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.916 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.916 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.916 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.173 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:32.173 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.173 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.173 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.173 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.174 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.174 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.174 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.174 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.174 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.174 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.174 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.174 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.431 00:16:32.431 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.431 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.431 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.688 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.688 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.688 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.688 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.688 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.688 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.688 { 00:16:32.688 "cntlid": 9, 00:16:32.688 "qid": 0, 00:16:32.688 "state": "enabled", 00:16:32.688 "thread": "nvmf_tgt_poll_group_000", 00:16:32.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:32.688 "listen_address": { 00:16:32.688 "trtype": "TCP", 00:16:32.688 "adrfam": "IPv4", 00:16:32.688 "traddr": "10.0.0.2", 00:16:32.688 "trsvcid": "4420" 00:16:32.688 }, 00:16:32.688 "peer_address": { 00:16:32.688 "trtype": "TCP", 00:16:32.688 "adrfam": "IPv4", 00:16:32.688 "traddr": "10.0.0.1", 00:16:32.688 "trsvcid": "37874" 00:16:32.688 }, 00:16:32.688 "auth": { 00:16:32.688 "state": "completed", 00:16:32.688 "digest": "sha256", 00:16:32.688 "dhgroup": "ffdhe2048" 00:16:32.688 } 00:16:32.688 } 00:16:32.688 ]' 00:16:32.688 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.688 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.688 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.951 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:32.951 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.951 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.951 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.951 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.208 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:16:33.208 06:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:16:34.146 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.146 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:34.146 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.146 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.146 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.146 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.146 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.146 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.404 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:34.404 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.404 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.404 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.404 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.404 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.404 06:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.404 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.404 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.404 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.404 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.404 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.404 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.662 00:16:34.662 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.662 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.662 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.919 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.919 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.919 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.919 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.919 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.919 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.919 { 00:16:34.919 "cntlid": 11, 00:16:34.919 "qid": 0, 00:16:34.919 "state": "enabled", 00:16:34.919 "thread": "nvmf_tgt_poll_group_000", 00:16:34.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:34.919 "listen_address": { 00:16:34.919 "trtype": "TCP", 00:16:34.919 "adrfam": "IPv4", 00:16:34.919 "traddr": "10.0.0.2", 00:16:34.919 "trsvcid": "4420" 00:16:34.919 }, 00:16:34.919 "peer_address": { 00:16:34.919 "trtype": "TCP", 00:16:34.919 "adrfam": "IPv4", 00:16:34.919 "traddr": "10.0.0.1", 00:16:34.919 "trsvcid": "37894" 00:16:34.919 }, 00:16:34.919 "auth": { 00:16:34.919 "state": "completed", 00:16:34.919 "digest": "sha256", 00:16:34.919 "dhgroup": "ffdhe2048" 00:16:34.919 } 00:16:34.919 } 00:16:34.919 ]' 00:16:34.919 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.919 06:20:25 
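
One bash detail doing real work in the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion traced just above: ${var:+word} yields word only when var is set and non-empty, so the array holds either two flag tokens or nothing, and key3 (generated without a controller key) silently loses the flag:

    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
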
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.919 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.176 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:35.176 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.176 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.176 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.176 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.432 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:16:35.432 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:16:36.364 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.364 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:36.364 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.364 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.364 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.364 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.364 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:36.364 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:36.622 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:36.622 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.622 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.622 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:36.622 06:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.622 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.622 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.622 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.623 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.623 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.623 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.623 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.623 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.881 00:16:36.881 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.881 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.881 06:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.138 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.138 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.138 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.138 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.138 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.138 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.138 { 00:16:37.138 "cntlid": 13, 00:16:37.138 "qid": 0, 00:16:37.138 "state": "enabled", 00:16:37.138 "thread": "nvmf_tgt_poll_group_000", 00:16:37.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:37.138 "listen_address": { 00:16:37.138 "trtype": "TCP", 00:16:37.138 "adrfam": "IPv4", 00:16:37.138 "traddr": "10.0.0.2", 00:16:37.138 "trsvcid": "4420" 00:16:37.138 }, 00:16:37.138 "peer_address": { 00:16:37.138 "trtype": "TCP", 00:16:37.138 "adrfam": "IPv4", 00:16:37.138 "traddr": "10.0.0.1", 00:16:37.138 "trsvcid": "37922" 00:16:37.138 }, 00:16:37.138 "auth": { 00:16:37.138 "state": "completed", 00:16:37.138 "digest": 
"sha256", 00:16:37.138 "dhgroup": "ffdhe2048" 00:16:37.138 } 00:16:37.138 } 00:16:37.138 ]' 00:16:37.138 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.138 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.138 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.396 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.396 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.396 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.396 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.396 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.654 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:16:37.654 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:16:38.589 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.589 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:38.589 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.589 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.589 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.589 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.589 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.589 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.846 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:38.846 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.846 06:20:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.846 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:38.846 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:38.846 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.846 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:38.846 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.846 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.846 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.846 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.846 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.846 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.104 00:16:39.104 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.104 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.104 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.362 { 00:16:39.362 "cntlid": 15, 00:16:39.362 "qid": 0, 00:16:39.362 "state": "enabled", 00:16:39.362 "thread": "nvmf_tgt_poll_group_000", 00:16:39.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:39.362 "listen_address": { 00:16:39.362 "trtype": "TCP", 00:16:39.362 "adrfam": "IPv4", 00:16:39.362 "traddr": "10.0.0.2", 00:16:39.362 "trsvcid": "4420" 00:16:39.362 }, 00:16:39.362 "peer_address": { 00:16:39.362 "trtype": "TCP", 00:16:39.362 "adrfam": "IPv4", 00:16:39.362 "traddr": "10.0.0.1", 00:16:39.362 
"trsvcid": "37946" 00:16:39.362 }, 00:16:39.362 "auth": { 00:16:39.362 "state": "completed", 00:16:39.362 "digest": "sha256", 00:16:39.362 "dhgroup": "ffdhe2048" 00:16:39.362 } 00:16:39.362 } 00:16:39.362 ]' 00:16:39.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.619 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.619 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.619 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.875 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:16:39.875 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:16:40.806 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.806 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:40.806 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.806 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.806 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.806 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.806 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.806 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.806 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.063 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:41.063 06:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.063 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.063 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:41.063 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.063 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.063 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.063 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.063 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.063 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.063 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.063 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.063 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.319 00:16:41.319 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.319 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.319 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.576 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.576 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.576 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.576 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.576 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.576 { 00:16:41.576 "cntlid": 17, 00:16:41.576 "qid": 0, 00:16:41.576 "state": "enabled", 00:16:41.576 "thread": "nvmf_tgt_poll_group_000", 00:16:41.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:41.576 "listen_address": { 00:16:41.576 "trtype": "TCP", 00:16:41.576 "adrfam": "IPv4", 
00:16:41.576 "traddr": "10.0.0.2", 00:16:41.576 "trsvcid": "4420" 00:16:41.576 }, 00:16:41.576 "peer_address": { 00:16:41.576 "trtype": "TCP", 00:16:41.576 "adrfam": "IPv4", 00:16:41.576 "traddr": "10.0.0.1", 00:16:41.576 "trsvcid": "49206" 00:16:41.576 }, 00:16:41.576 "auth": { 00:16:41.576 "state": "completed", 00:16:41.576 "digest": "sha256", 00:16:41.576 "dhgroup": "ffdhe3072" 00:16:41.576 } 00:16:41.576 } 00:16:41.576 ]' 00:16:41.576 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.833 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.833 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.833 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.833 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.833 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.833 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.833 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.090 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:16:42.090 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:16:43.018 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.018 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:43.018 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.018 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.018 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.018 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.018 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.018 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.275 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:43.275 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.275 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.275 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.275 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.275 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.275 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.275 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.275 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.275 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.275 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.275 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.275 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.533 00:16:43.533 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.533 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.533 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.790 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.790 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.790 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.790 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.790 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.791 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.791 { 
00:16:43.791 "cntlid": 19, 00:16:43.791 "qid": 0, 00:16:43.791 "state": "enabled", 00:16:43.791 "thread": "nvmf_tgt_poll_group_000", 00:16:43.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:43.791 "listen_address": { 00:16:43.791 "trtype": "TCP", 00:16:43.791 "adrfam": "IPv4", 00:16:43.791 "traddr": "10.0.0.2", 00:16:43.791 "trsvcid": "4420" 00:16:43.791 }, 00:16:43.791 "peer_address": { 00:16:43.791 "trtype": "TCP", 00:16:43.791 "adrfam": "IPv4", 00:16:43.791 "traddr": "10.0.0.1", 00:16:43.791 "trsvcid": "49232" 00:16:43.791 }, 00:16:43.791 "auth": { 00:16:43.791 "state": "completed", 00:16:43.791 "digest": "sha256", 00:16:43.791 "dhgroup": "ffdhe3072" 00:16:43.791 } 00:16:43.791 } 00:16:43.791 ]' 00:16:43.791 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.048 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.048 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.048 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.048 06:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.048 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.048 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.048 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.307 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:16:44.307 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:16:45.237 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.237 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:45.237 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.237 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.237 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.237 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.237 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.237 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.512 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:45.512 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.512 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.512 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:45.512 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:45.512 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.512 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.512 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.512 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.512 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.513 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.513 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.513 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.770 00:16:45.770 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.770 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.770 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.028 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.028 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.028 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.028 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.028 06:20:36 
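
Two SPDK instances are involved throughout: the rpc_cmd calls go to the nvmf target over its default RPC socket, while the hostrpc calls pass -s /var/tmp/host.sock to reach a second SPDK app acting as the initiator. Judging by the trace, the two wrappers behave roughly like the sketch below (the real rpc_cmd in autotest_common.sh adds error handling and xtrace suppression, so this is an approximation):

    rpc_cmd() { scripts/rpc.py "$@"; }                          # target, /var/tmp/spdk.sock
    hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }    # host-side bdev_nvme
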
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.028 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.028 { 00:16:46.028 "cntlid": 21, 00:16:46.028 "qid": 0, 00:16:46.028 "state": "enabled", 00:16:46.028 "thread": "nvmf_tgt_poll_group_000", 00:16:46.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:46.028 "listen_address": { 00:16:46.028 "trtype": "TCP", 00:16:46.028 "adrfam": "IPv4", 00:16:46.028 "traddr": "10.0.0.2", 00:16:46.029 "trsvcid": "4420" 00:16:46.029 }, 00:16:46.029 "peer_address": { 00:16:46.029 "trtype": "TCP", 00:16:46.029 "adrfam": "IPv4", 00:16:46.029 "traddr": "10.0.0.1", 00:16:46.029 "trsvcid": "49260" 00:16:46.029 }, 00:16:46.029 "auth": { 00:16:46.029 "state": "completed", 00:16:46.029 "digest": "sha256", 00:16:46.029 "dhgroup": "ffdhe3072" 00:16:46.029 } 00:16:46.029 } 00:16:46.029 ]' 00:16:46.029 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.287 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.287 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.287 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.287 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.287 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.287 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.287 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.544 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:16:46.545 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:16:47.478 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.478 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:47.478 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.478 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.478 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:47.478 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.478 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.478 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.736 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:47.736 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.736 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.736 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:47.736 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:47.736 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.736 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:47.736 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.736 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.736 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.736 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:47.736 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.736 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.993 00:16:47.993 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.993 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.993 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.251 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.251 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.251 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.251 06:20:38 
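
Alongside the SPDK initiator path, each key is also exercised through the kernel initiator: the nvme_connect helper wraps nvme-cli's connect command with in-band authentication flags. The DHHC-1:NN: prefix on each secret identifies the hash used to transform the key (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), followed by the base64-encoded key material. Reduced to a sketch, with the secrets replaced by placeholders since the real values come from the generated test configuration:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-secret 'DHHC-1:02:<host key>' \
        --dhchap-ctrl-secret 'DHHC-1:01:<controller key>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
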
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.509 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.509 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.509 { 00:16:48.509 "cntlid": 23, 00:16:48.509 "qid": 0, 00:16:48.509 "state": "enabled", 00:16:48.509 "thread": "nvmf_tgt_poll_group_000", 00:16:48.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:48.509 "listen_address": { 00:16:48.509 "trtype": "TCP", 00:16:48.509 "adrfam": "IPv4", 00:16:48.509 "traddr": "10.0.0.2", 00:16:48.509 "trsvcid": "4420" 00:16:48.509 }, 00:16:48.509 "peer_address": { 00:16:48.509 "trtype": "TCP", 00:16:48.509 "adrfam": "IPv4", 00:16:48.509 "traddr": "10.0.0.1", 00:16:48.509 "trsvcid": "49284" 00:16:48.509 }, 00:16:48.509 "auth": { 00:16:48.509 "state": "completed", 00:16:48.509 "digest": "sha256", 00:16:48.509 "dhgroup": "ffdhe3072" 00:16:48.509 } 00:16:48.509 } 00:16:48.509 ]' 00:16:48.509 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.509 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.509 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.509 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.509 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.509 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.509 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.509 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.767 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:16:48.767 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:16:49.700 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.700 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:49.700 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.700 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.700 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:49.700 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.700 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.700 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.700 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.959 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:49.959 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.959 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.959 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:49.959 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.959 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.959 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.959 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.959 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.959 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.959 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.959 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.959 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.524 00:16:50.524 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.524 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.525 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.783 { 00:16:50.783 "cntlid": 25, 00:16:50.783 "qid": 0, 00:16:50.783 "state": "enabled", 00:16:50.783 "thread": "nvmf_tgt_poll_group_000", 00:16:50.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:50.783 "listen_address": { 00:16:50.783 "trtype": "TCP", 00:16:50.783 "adrfam": "IPv4", 00:16:50.783 "traddr": "10.0.0.2", 00:16:50.783 "trsvcid": "4420" 00:16:50.783 }, 00:16:50.783 "peer_address": { 00:16:50.783 "trtype": "TCP", 00:16:50.783 "adrfam": "IPv4", 00:16:50.783 "traddr": "10.0.0.1", 00:16:50.783 "trsvcid": "41030" 00:16:50.783 }, 00:16:50.783 "auth": { 00:16:50.783 "state": "completed", 00:16:50.783 "digest": "sha256", 00:16:50.783 "dhgroup": "ffdhe4096" 00:16:50.783 } 00:16:50.783 } 00:16:50.783 ]' 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.783 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.041 06:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:16:51.041 06:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:16:51.973 06:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.973 06:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:51.973 06:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.973 06:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.973 06:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.973 06:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.973 06:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.973 06:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.230 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:52.230 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.230 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.230 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:52.230 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:52.230 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.230 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.230 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.231 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.231 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.231 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.231 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.231 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.488 00:16:52.488 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.488 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.488 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.053 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.053 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.053 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.053 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.053 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.053 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.053 { 00:16:53.053 "cntlid": 27, 00:16:53.053 "qid": 0, 00:16:53.053 "state": "enabled", 00:16:53.053 "thread": "nvmf_tgt_poll_group_000", 00:16:53.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:53.053 "listen_address": { 00:16:53.053 "trtype": "TCP", 00:16:53.053 "adrfam": "IPv4", 00:16:53.053 "traddr": "10.0.0.2", 00:16:53.053 "trsvcid": "4420" 00:16:53.053 }, 00:16:53.053 "peer_address": { 00:16:53.053 "trtype": "TCP", 00:16:53.053 "adrfam": "IPv4", 00:16:53.053 "traddr": "10.0.0.1", 00:16:53.053 "trsvcid": "41060" 00:16:53.053 }, 00:16:53.053 "auth": { 00:16:53.053 "state": "completed", 00:16:53.053 "digest": "sha256", 00:16:53.053 "dhgroup": "ffdhe4096" 00:16:53.053 } 00:16:53.053 } 00:16:53.053 ]' 00:16:53.053 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.053 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.053 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.053 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.053 06:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.053 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.053 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.053 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.308 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:16:53.309 06:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:16:54.236 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:54.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.236 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:54.236 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.236 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.236 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.492 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:54.492 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.492 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.492 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:54.492 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.492 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.492 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.492 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.492 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.492 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.492 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.492 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.492 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.748 00:16:54.748 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:16:54.748 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.748 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.311 { 00:16:55.311 "cntlid": 29, 00:16:55.311 "qid": 0, 00:16:55.311 "state": "enabled", 00:16:55.311 "thread": "nvmf_tgt_poll_group_000", 00:16:55.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:55.311 "listen_address": { 00:16:55.311 "trtype": "TCP", 00:16:55.311 "adrfam": "IPv4", 00:16:55.311 "traddr": "10.0.0.2", 00:16:55.311 "trsvcid": "4420" 00:16:55.311 }, 00:16:55.311 "peer_address": { 00:16:55.311 "trtype": "TCP", 00:16:55.311 "adrfam": "IPv4", 00:16:55.311 "traddr": "10.0.0.1", 00:16:55.311 "trsvcid": "41092" 00:16:55.311 }, 00:16:55.311 "auth": { 00:16:55.311 "state": "completed", 00:16:55.311 "digest": "sha256", 00:16:55.311 "dhgroup": "ffdhe4096" 00:16:55.311 } 00:16:55.311 } 00:16:55.311 ]' 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.311 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.568 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:16:55.568 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: 
--dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:16:56.499 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.499 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:56.499 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.499 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.499 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.499 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.499 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.500 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.756 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:56.756 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.756 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.756 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:56.756 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.756 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.756 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:56.756 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.756 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.756 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.756 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.756 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.756 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.014 00:16:57.014 06:20:47 
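
After every attach the script makes the same three assertions against the first qpair reported by the target, which is what the repeated jq calls at target/auth.sh@75-77 are doing. Equivalently (the expected dhgroup tracks the outer loop, ffdhe4096 at this point in the run):

    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
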
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.014 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.014 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.579 { 00:16:57.579 "cntlid": 31, 00:16:57.579 "qid": 0, 00:16:57.579 "state": "enabled", 00:16:57.579 "thread": "nvmf_tgt_poll_group_000", 00:16:57.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:57.579 "listen_address": { 00:16:57.579 "trtype": "TCP", 00:16:57.579 "adrfam": "IPv4", 00:16:57.579 "traddr": "10.0.0.2", 00:16:57.579 "trsvcid": "4420" 00:16:57.579 }, 00:16:57.579 "peer_address": { 00:16:57.579 "trtype": "TCP", 00:16:57.579 "adrfam": "IPv4", 00:16:57.579 "traddr": "10.0.0.1", 00:16:57.579 "trsvcid": "41116" 00:16:57.579 }, 00:16:57.579 "auth": { 00:16:57.579 "state": "completed", 00:16:57.579 "digest": "sha256", 00:16:57.579 "dhgroup": "ffdhe4096" 00:16:57.579 } 00:16:57.579 } 00:16:57.579 ]' 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.579 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.838 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:16:57.838 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret 
DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:16:58.768 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.768 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:58.768 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.768 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.769 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.769 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.769 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.769 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.769 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:59.026 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:59.026 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.026 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.026 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:59.026 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.026 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.026 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.026 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.026 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.026 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.026 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.026 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.026 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.590 00:16:59.590 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.590 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.590 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.856 { 00:16:59.856 "cntlid": 33, 00:16:59.856 "qid": 0, 00:16:59.856 "state": "enabled", 00:16:59.856 "thread": "nvmf_tgt_poll_group_000", 00:16:59.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:59.856 "listen_address": { 00:16:59.856 "trtype": "TCP", 00:16:59.856 "adrfam": "IPv4", 00:16:59.856 "traddr": "10.0.0.2", 00:16:59.856 "trsvcid": "4420" 00:16:59.856 }, 00:16:59.856 "peer_address": { 00:16:59.856 "trtype": "TCP", 00:16:59.856 "adrfam": "IPv4", 00:16:59.856 "traddr": "10.0.0.1", 00:16:59.856 "trsvcid": "41146" 00:16:59.856 }, 00:16:59.856 "auth": { 00:16:59.856 "state": "completed", 00:16:59.856 "digest": "sha256", 00:16:59.856 "dhgroup": "ffdhe6144" 00:16:59.856 } 00:16:59.856 } 00:16:59.856 ]' 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.856 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.167 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret 
DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:17:00.167 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:17:01.123 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.123 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:01.123 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.123 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.123 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.123 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.123 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.123 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.380 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:01.380 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.380 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.380 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:01.380 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:01.380 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.380 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.380 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.380 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.380 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.380 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.380 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.380 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.946 00:17:01.946 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.946 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.946 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.203 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.203 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.203 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.203 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.203 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.203 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.203 { 00:17:02.203 "cntlid": 35, 00:17:02.203 "qid": 0, 00:17:02.203 "state": "enabled", 00:17:02.203 "thread": "nvmf_tgt_poll_group_000", 00:17:02.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:02.203 "listen_address": { 00:17:02.203 "trtype": "TCP", 00:17:02.203 "adrfam": "IPv4", 00:17:02.203 "traddr": "10.0.0.2", 00:17:02.203 "trsvcid": "4420" 00:17:02.203 }, 00:17:02.203 "peer_address": { 00:17:02.203 "trtype": "TCP", 00:17:02.203 "adrfam": "IPv4", 00:17:02.203 "traddr": "10.0.0.1", 00:17:02.203 "trsvcid": "38732" 00:17:02.203 }, 00:17:02.203 "auth": { 00:17:02.203 "state": "completed", 00:17:02.203 "digest": "sha256", 00:17:02.203 "dhgroup": "ffdhe6144" 00:17:02.203 } 00:17:02.203 } 00:17:02.203 ]' 00:17:02.203 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.203 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.203 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.203 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.203 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.460 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.460 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.460 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.719 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:17:02.719 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:17:03.653 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.653 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:03.653 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.653 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.653 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.653 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.653 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.653 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.654 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:03.654 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.654 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.654 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:03.654 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.654 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.654 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.654 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.654 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.654 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.654 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.654 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.654 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.218 00:17:04.218 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.218 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.218 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.476 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.476 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.476 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.476 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.476 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.476 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.476 { 00:17:04.476 "cntlid": 37, 00:17:04.476 "qid": 0, 00:17:04.476 "state": "enabled", 00:17:04.476 "thread": "nvmf_tgt_poll_group_000", 00:17:04.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:04.476 "listen_address": { 00:17:04.476 "trtype": "TCP", 00:17:04.476 "adrfam": "IPv4", 00:17:04.476 "traddr": "10.0.0.2", 00:17:04.476 "trsvcid": "4420" 00:17:04.476 }, 00:17:04.476 "peer_address": { 00:17:04.476 "trtype": "TCP", 00:17:04.476 "adrfam": "IPv4", 00:17:04.476 "traddr": "10.0.0.1", 00:17:04.476 "trsvcid": "38744" 00:17:04.476 }, 00:17:04.476 "auth": { 00:17:04.476 "state": "completed", 00:17:04.476 "digest": "sha256", 00:17:04.476 "dhgroup": "ffdhe6144" 00:17:04.476 } 00:17:04.476 } 00:17:04.476 ]' 00:17:04.476 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.733 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.733 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.733 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:04.733 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.733 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.733 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:04.733 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.993 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:17:04.993 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:17:05.925 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.925 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:05.925 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.925 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.925 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.925 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.925 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:05.925 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.183 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:06.183 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.183 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.183 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:06.183 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.183 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.183 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:06.183 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.183 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.183 06:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.183 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.183 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.183 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.749 00:17:06.749 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.749 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.749 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.006 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.006 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.006 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.006 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.006 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.006 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.006 { 00:17:07.006 "cntlid": 39, 00:17:07.006 "qid": 0, 00:17:07.006 "state": "enabled", 00:17:07.006 "thread": "nvmf_tgt_poll_group_000", 00:17:07.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:07.006 "listen_address": { 00:17:07.006 "trtype": "TCP", 00:17:07.006 "adrfam": "IPv4", 00:17:07.006 "traddr": "10.0.0.2", 00:17:07.006 "trsvcid": "4420" 00:17:07.006 }, 00:17:07.006 "peer_address": { 00:17:07.006 "trtype": "TCP", 00:17:07.006 "adrfam": "IPv4", 00:17:07.006 "traddr": "10.0.0.1", 00:17:07.006 "trsvcid": "38776" 00:17:07.006 }, 00:17:07.006 "auth": { 00:17:07.006 "state": "completed", 00:17:07.006 "digest": "sha256", 00:17:07.006 "dhgroup": "ffdhe6144" 00:17:07.006 } 00:17:07.006 } 00:17:07.006 ]' 00:17:07.006 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.006 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.006 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.006 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.006 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.006 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:07.006 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.006 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.572 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:17:07.572 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:17:08.137 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.394 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:08.394 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.394 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.394 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.394 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.394 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.394 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:08.394 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:08.652 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:08.652 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.652 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:08.652 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:08.652 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:08.652 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.652 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.652 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
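For reference, each authentication round that target/auth.sh drives in this trace reduces to the short sequence below. This is a sketch reconstructed from the trace itself, not an excerpt of the script: it uses the script's own helpers as they appear above (rpc_cmd talks to the SPDK target, hostrpc passes -s /var/tmp/host.sock to the host-side rpc.py), <hostnqn> stands for nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02, and the DHHC-1 secret values are abbreviated.

# One round per (digest, dhgroup, keyid), as exercised repeatedly above:
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192   # auth.sh@121: select digest/dhgroup on the host
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0   # auth.sh@70: allow the host with this key pair
bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0   # auth.sh@71: hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # auth.sh@74: dump checked with jq for .auth.digest/.auth.dhgroup and .auth.state == "completed"
hostrpc bdev_nvme_detach_controller nvme0   # auth.sh@78
nvme_connect --dhchap-secret DHHC-1:..: --dhchap-ctrl-secret DHHC-1:..:   # auth.sh@80: nvme connect -t tcp -a 10.0.0.2 -l 0 with the same secrets
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # auth.sh@82
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>   # auth.sh@83: reset for the next keyid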
00:17:08.652 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.652 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.652 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.652 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.652 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.591 00:17:09.591 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.591 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.591 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.591 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.591 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.591 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.591 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.591 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.591 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.591 { 00:17:09.591 "cntlid": 41, 00:17:09.591 "qid": 0, 00:17:09.591 "state": "enabled", 00:17:09.591 "thread": "nvmf_tgt_poll_group_000", 00:17:09.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:09.591 "listen_address": { 00:17:09.591 "trtype": "TCP", 00:17:09.591 "adrfam": "IPv4", 00:17:09.591 "traddr": "10.0.0.2", 00:17:09.591 "trsvcid": "4420" 00:17:09.591 }, 00:17:09.591 "peer_address": { 00:17:09.591 "trtype": "TCP", 00:17:09.591 "adrfam": "IPv4", 00:17:09.591 "traddr": "10.0.0.1", 00:17:09.591 "trsvcid": "38816" 00:17:09.591 }, 00:17:09.591 "auth": { 00:17:09.591 "state": "completed", 00:17:09.591 "digest": "sha256", 00:17:09.591 "dhgroup": "ffdhe8192" 00:17:09.591 } 00:17:09.591 } 00:17:09.591 ]' 00:17:09.591 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.591 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.591 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.850 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:09.850 06:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.850 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.850 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.850 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.107 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:17:10.107 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:17:11.042 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.042 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:11.042 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.042 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.042 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.042 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.042 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.042 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.301 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:11.301 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.301 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:11.301 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:11.301 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.301 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.301 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.301 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.301 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.301 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.301 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.301 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.301 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.237 00:17:12.237 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.237 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.237 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.494 { 00:17:12.494 "cntlid": 43, 00:17:12.494 "qid": 0, 00:17:12.494 "state": "enabled", 00:17:12.494 "thread": "nvmf_tgt_poll_group_000", 00:17:12.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:12.494 "listen_address": { 00:17:12.494 "trtype": "TCP", 00:17:12.494 "adrfam": "IPv4", 00:17:12.494 "traddr": "10.0.0.2", 00:17:12.494 "trsvcid": "4420" 00:17:12.494 }, 00:17:12.494 "peer_address": { 00:17:12.494 "trtype": "TCP", 00:17:12.494 "adrfam": "IPv4", 00:17:12.494 "traddr": "10.0.0.1", 00:17:12.494 "trsvcid": "45134" 00:17:12.494 }, 00:17:12.494 "auth": { 00:17:12.494 "state": "completed", 00:17:12.494 "digest": "sha256", 00:17:12.494 "dhgroup": "ffdhe8192" 00:17:12.494 } 00:17:12.494 } 00:17:12.494 ]' 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.494 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.752 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:17:12.752 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:17:13.689 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.689 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.689 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.689 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.689 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.689 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.689 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.689 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.947 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:13.947 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.947 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:13.947 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.947 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.947 06:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.947 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.947 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.947 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.947 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.947 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.947 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.947 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.882 00:17:14.882 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.882 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.882 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.139 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.139 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.139 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.139 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.139 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.139 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.139 { 00:17:15.139 "cntlid": 45, 00:17:15.139 "qid": 0, 00:17:15.139 "state": "enabled", 00:17:15.139 "thread": "nvmf_tgt_poll_group_000", 00:17:15.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:15.139 "listen_address": { 00:17:15.139 "trtype": "TCP", 00:17:15.139 "adrfam": "IPv4", 00:17:15.139 "traddr": "10.0.0.2", 00:17:15.139 "trsvcid": "4420" 00:17:15.139 }, 00:17:15.139 "peer_address": { 00:17:15.139 "trtype": "TCP", 00:17:15.139 "adrfam": "IPv4", 00:17:15.139 "traddr": "10.0.0.1", 00:17:15.139 "trsvcid": "45150" 00:17:15.139 }, 00:17:15.139 "auth": { 00:17:15.139 "state": "completed", 00:17:15.139 "digest": "sha256", 00:17:15.139 "dhgroup": "ffdhe8192" 00:17:15.139 } 00:17:15.139 } 00:17:15.139 ]' 00:17:15.139 
06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.139 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.139 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.139 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:15.139 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.139 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.139 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.139 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.397 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:17:15.397 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:17:16.331 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.331 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:16.331 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.331 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.331 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.331 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.331 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:16.331 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:16.589 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:16.589 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.589 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.589 06:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:16.589 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.589 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.589 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:16.589 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.589 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.589 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.589 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.589 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.589 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.523 00:17:17.523 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.523 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.523 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.780 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.780 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.780 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.780 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.780 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.780 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.780 { 00:17:17.780 "cntlid": 47, 00:17:17.780 "qid": 0, 00:17:17.780 "state": "enabled", 00:17:17.780 "thread": "nvmf_tgt_poll_group_000", 00:17:17.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:17.780 "listen_address": { 00:17:17.780 "trtype": "TCP", 00:17:17.780 "adrfam": "IPv4", 00:17:17.780 "traddr": "10.0.0.2", 00:17:17.780 "trsvcid": "4420" 00:17:17.780 }, 00:17:17.780 "peer_address": { 00:17:17.780 "trtype": "TCP", 00:17:17.780 "adrfam": "IPv4", 00:17:17.780 "traddr": "10.0.0.1", 00:17:17.780 "trsvcid": "45180" 00:17:17.780 }, 00:17:17.780 "auth": { 00:17:17.780 "state": "completed", 00:17:17.780 
"digest": "sha256", 00:17:17.780 "dhgroup": "ffdhe8192" 00:17:17.780 } 00:17:17.780 } 00:17:17.780 ]' 00:17:17.780 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.780 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.780 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.780 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.780 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.037 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.037 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.037 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.296 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:17:18.296 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:17:19.232 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.232 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:19.232 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.232 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.232 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.232 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:19.232 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.232 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.232 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:19.232 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:19.488 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:19.488 06:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.488 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.488 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:19.488 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:19.489 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.489 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.489 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.489 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.489 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.489 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.489 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.489 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.747 00:17:19.747 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.747 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.747 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.005 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.005 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.005 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.005 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.005 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.005 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.005 { 00:17:20.005 "cntlid": 49, 00:17:20.005 "qid": 0, 00:17:20.005 "state": "enabled", 00:17:20.005 "thread": "nvmf_tgt_poll_group_000", 00:17:20.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:20.005 "listen_address": { 00:17:20.005 "trtype": "TCP", 00:17:20.005 "adrfam": "IPv4", 
00:17:20.005 "traddr": "10.0.0.2", 00:17:20.005 "trsvcid": "4420" 00:17:20.005 }, 00:17:20.005 "peer_address": { 00:17:20.005 "trtype": "TCP", 00:17:20.005 "adrfam": "IPv4", 00:17:20.005 "traddr": "10.0.0.1", 00:17:20.005 "trsvcid": "45194" 00:17:20.005 }, 00:17:20.005 "auth": { 00:17:20.005 "state": "completed", 00:17:20.005 "digest": "sha384", 00:17:20.005 "dhgroup": "null" 00:17:20.005 } 00:17:20.005 } 00:17:20.005 ]' 00:17:20.005 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.005 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.005 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.005 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:20.005 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.005 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.005 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.005 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.263 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:17:20.263 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:17:21.199 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.199 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:21.199 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.199 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.199 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.199 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.199 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:21.199 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:21.765 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:21.765 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.765 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.765 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:21.765 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:21.765 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.765 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.765 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.765 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.765 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.765 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.765 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.765 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.023 00:17:22.023 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.023 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.023 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.280 { 00:17:22.280 "cntlid": 51, 00:17:22.280 "qid": 0, 00:17:22.280 "state": "enabled", 
00:17:22.280 "thread": "nvmf_tgt_poll_group_000", 00:17:22.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:22.280 "listen_address": { 00:17:22.280 "trtype": "TCP", 00:17:22.280 "adrfam": "IPv4", 00:17:22.280 "traddr": "10.0.0.2", 00:17:22.280 "trsvcid": "4420" 00:17:22.280 }, 00:17:22.280 "peer_address": { 00:17:22.280 "trtype": "TCP", 00:17:22.280 "adrfam": "IPv4", 00:17:22.280 "traddr": "10.0.0.1", 00:17:22.280 "trsvcid": "48806" 00:17:22.280 }, 00:17:22.280 "auth": { 00:17:22.280 "state": "completed", 00:17:22.280 "digest": "sha384", 00:17:22.280 "dhgroup": "null" 00:17:22.280 } 00:17:22.280 } 00:17:22.280 ]' 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.280 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.538 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:17:22.538 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:17:23.473 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.473 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:23.473 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.473 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.473 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.473 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.473 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:23.473 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:23.730 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:23.730 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.730 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.730 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:23.730 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:23.730 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.730 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.730 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.730 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.730 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.730 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.730 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.730 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.296 00:17:24.296 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.296 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.296 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.554 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.554 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.554 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.554 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.554 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.554 06:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.554 { 00:17:24.554 "cntlid": 53, 00:17:24.554 "qid": 0, 00:17:24.554 "state": "enabled", 00:17:24.554 "thread": "nvmf_tgt_poll_group_000", 00:17:24.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:24.554 "listen_address": { 00:17:24.554 "trtype": "TCP", 00:17:24.554 "adrfam": "IPv4", 00:17:24.554 "traddr": "10.0.0.2", 00:17:24.554 "trsvcid": "4420" 00:17:24.554 }, 00:17:24.554 "peer_address": { 00:17:24.554 "trtype": "TCP", 00:17:24.554 "adrfam": "IPv4", 00:17:24.554 "traddr": "10.0.0.1", 00:17:24.554 "trsvcid": "48820" 00:17:24.554 }, 00:17:24.554 "auth": { 00:17:24.554 "state": "completed", 00:17:24.554 "digest": "sha384", 00:17:24.554 "dhgroup": "null" 00:17:24.554 } 00:17:24.554 } 00:17:24.554 ]' 00:17:24.554 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.554 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.554 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.554 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:24.554 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.554 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.554 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.554 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.812 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:17:24.812 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:17:25.750 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.750 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:25.750 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.750 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.750 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.750 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:25.750 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.750 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:26.008 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:26.008 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.008 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.008 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:26.008 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:26.008 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.008 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:26.008 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.008 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.008 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.008 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.008 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.008 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.576 00:17:26.576 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.576 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.576 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.576 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.576 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.576 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.576 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.576 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.576 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.576 { 00:17:26.576 "cntlid": 55, 00:17:26.576 "qid": 0, 00:17:26.576 "state": "enabled", 00:17:26.576 "thread": "nvmf_tgt_poll_group_000", 00:17:26.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:26.576 "listen_address": { 00:17:26.576 "trtype": "TCP", 00:17:26.576 "adrfam": "IPv4", 00:17:26.576 "traddr": "10.0.0.2", 00:17:26.576 "trsvcid": "4420" 00:17:26.576 }, 00:17:26.576 "peer_address": { 00:17:26.576 "trtype": "TCP", 00:17:26.576 "adrfam": "IPv4", 00:17:26.576 "traddr": "10.0.0.1", 00:17:26.576 "trsvcid": "48840" 00:17:26.576 }, 00:17:26.576 "auth": { 00:17:26.576 "state": "completed", 00:17:26.576 "digest": "sha384", 00:17:26.576 "dhgroup": "null" 00:17:26.576 } 00:17:26.576 } 00:17:26.576 ]' 00:17:26.832 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.832 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.832 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.832 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:26.832 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.832 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.832 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.832 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.091 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:17:27.091 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:17:28.024 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.024 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:28.024 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.024 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.024 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.024 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.024 06:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.024 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:28.024 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:28.280 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:28.280 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.280 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.280 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:28.280 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:28.280 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.280 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.280 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.280 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.280 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.280 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.280 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.280 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.847 00:17:28.848 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.848 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.848 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.105 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.105 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.105 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:29.105 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.105 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.105 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.105 { 00:17:29.105 "cntlid": 57, 00:17:29.105 "qid": 0, 00:17:29.105 "state": "enabled", 00:17:29.105 "thread": "nvmf_tgt_poll_group_000", 00:17:29.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:29.105 "listen_address": { 00:17:29.105 "trtype": "TCP", 00:17:29.105 "adrfam": "IPv4", 00:17:29.105 "traddr": "10.0.0.2", 00:17:29.105 "trsvcid": "4420" 00:17:29.105 }, 00:17:29.105 "peer_address": { 00:17:29.105 "trtype": "TCP", 00:17:29.105 "adrfam": "IPv4", 00:17:29.105 "traddr": "10.0.0.1", 00:17:29.105 "trsvcid": "48866" 00:17:29.105 }, 00:17:29.105 "auth": { 00:17:29.105 "state": "completed", 00:17:29.105 "digest": "sha384", 00:17:29.105 "dhgroup": "ffdhe2048" 00:17:29.105 } 00:17:29.105 } 00:17:29.105 ]' 00:17:29.105 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.105 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.105 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.105 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.105 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.105 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.105 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.105 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.365 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:17:29.365 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:17:30.307 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.307 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:30.307 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.307 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.307 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.307 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.307 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:30.307 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:30.651 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:30.651 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.651 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.651 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:30.651 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.651 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.651 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.651 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.651 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.651 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.651 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.651 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.651 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.247 00:17:31.247 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.247 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.247 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.247 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.247 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.247 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.247 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.526 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.526 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.526 { 00:17:31.526 "cntlid": 59, 00:17:31.526 "qid": 0, 00:17:31.526 "state": "enabled", 00:17:31.526 "thread": "nvmf_tgt_poll_group_000", 00:17:31.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:31.526 "listen_address": { 00:17:31.526 "trtype": "TCP", 00:17:31.526 "adrfam": "IPv4", 00:17:31.526 "traddr": "10.0.0.2", 00:17:31.526 "trsvcid": "4420" 00:17:31.526 }, 00:17:31.526 "peer_address": { 00:17:31.526 "trtype": "TCP", 00:17:31.526 "adrfam": "IPv4", 00:17:31.526 "traddr": "10.0.0.1", 00:17:31.526 "trsvcid": "56898" 00:17:31.526 }, 00:17:31.526 "auth": { 00:17:31.526 "state": "completed", 00:17:31.526 "digest": "sha384", 00:17:31.526 "dhgroup": "ffdhe2048" 00:17:31.527 } 00:17:31.527 } 00:17:31.527 ]' 00:17:31.527 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.527 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.527 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.527 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:31.527 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.527 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.527 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.527 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.784 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:17:31.784 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:17:32.720 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.720 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:32.720 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.720 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.720 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.720 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.720 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:32.720 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:32.978 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:32.978 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.978 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.978 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:32.978 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.978 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.978 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.978 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.978 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.978 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.978 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.978 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.978 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.236 00:17:33.236 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.236 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.236 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.494 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.494 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.494 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.494 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.494 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.494 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.494 { 00:17:33.494 "cntlid": 61, 00:17:33.494 "qid": 0, 00:17:33.494 "state": "enabled", 00:17:33.494 "thread": "nvmf_tgt_poll_group_000", 00:17:33.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:33.494 "listen_address": { 00:17:33.494 "trtype": "TCP", 00:17:33.494 "adrfam": "IPv4", 00:17:33.494 "traddr": "10.0.0.2", 00:17:33.494 "trsvcid": "4420" 00:17:33.494 }, 00:17:33.494 "peer_address": { 00:17:33.494 "trtype": "TCP", 00:17:33.494 "adrfam": "IPv4", 00:17:33.494 "traddr": "10.0.0.1", 00:17:33.494 "trsvcid": "56920" 00:17:33.494 }, 00:17:33.494 "auth": { 00:17:33.494 "state": "completed", 00:17:33.494 "digest": "sha384", 00:17:33.494 "dhgroup": "ffdhe2048" 00:17:33.494 } 00:17:33.494 } 00:17:33.494 ]' 00:17:33.494 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.494 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.494 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.752 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.752 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.752 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.752 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.752 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.010 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:17:34.010 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:17:34.950 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.950 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:34.950 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.950 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.950 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.950 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.950 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:34.950 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:35.208 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:35.208 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.208 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.208 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:35.208 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.208 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.208 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:35.208 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.208 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.208 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.208 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.208 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.208 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.775 00:17:35.775 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.775 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:35.775 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.033 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.033 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.033 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.033 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.033 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.033 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.033 { 00:17:36.033 "cntlid": 63, 00:17:36.033 "qid": 0, 00:17:36.033 "state": "enabled", 00:17:36.033 "thread": "nvmf_tgt_poll_group_000", 00:17:36.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:36.033 "listen_address": { 00:17:36.033 "trtype": "TCP", 00:17:36.033 "adrfam": "IPv4", 00:17:36.033 "traddr": "10.0.0.2", 00:17:36.033 "trsvcid": "4420" 00:17:36.033 }, 00:17:36.033 "peer_address": { 00:17:36.033 "trtype": "TCP", 00:17:36.033 "adrfam": "IPv4", 00:17:36.033 "traddr": "10.0.0.1", 00:17:36.033 "trsvcid": "56940" 00:17:36.033 }, 00:17:36.033 "auth": { 00:17:36.033 "state": "completed", 00:17:36.033 "digest": "sha384", 00:17:36.033 "dhgroup": "ffdhe2048" 00:17:36.033 } 00:17:36.033 } 00:17:36.033 ]' 00:17:36.033 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.033 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.033 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.033 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.033 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.033 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.033 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.033 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.291 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:17:36.291 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:17:37.228 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:37.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.228 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:37.228 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.228 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.228 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.228 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.228 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.228 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.228 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.486 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:37.486 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.486 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.486 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:37.486 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.486 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.486 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.486 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.486 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.486 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.486 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.486 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.486 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.744 
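Each attach in this stretch is followed by the same verification pass: the controller name is read back from the host, and the qpair the target reports is checked for the expected digest, dhgroup, and a completed auth state before the controller is detached again. A condensed sketch of those checks for the iteration in flight here (sha384 / ffdhe3072), assuming the sockets and NQNs of this run; scripts/rpc.py abbreviates the absolute rpc.py path in the trace, and plain [[ ]] comparisons stand in for the escaped pattern matches the xtrace prints:

  # post-attach verification (sketch; rpc.py with no -s talks to the target's default socket)
  name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # detach the host-side controller before the nvme-cli leg
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0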
00:17:38.001 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.001 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.001 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.258 { 00:17:38.258 "cntlid": 65, 00:17:38.258 "qid": 0, 00:17:38.258 "state": "enabled", 00:17:38.258 "thread": "nvmf_tgt_poll_group_000", 00:17:38.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:38.258 "listen_address": { 00:17:38.258 "trtype": "TCP", 00:17:38.258 "adrfam": "IPv4", 00:17:38.258 "traddr": "10.0.0.2", 00:17:38.258 "trsvcid": "4420" 00:17:38.258 }, 00:17:38.258 "peer_address": { 00:17:38.258 "trtype": "TCP", 00:17:38.258 "adrfam": "IPv4", 00:17:38.258 "traddr": "10.0.0.1", 00:17:38.258 "trsvcid": "56952" 00:17:38.258 }, 00:17:38.258 "auth": { 00:17:38.258 "state": "completed", 00:17:38.258 "digest": "sha384", 00:17:38.258 "dhgroup": "ffdhe3072" 00:17:38.258 } 00:17:38.258 } 00:17:38.258 ]' 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.258 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.517 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:17:38.517 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:17:39.451 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.451 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:39.451 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.451 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.451 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.451 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.451 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.451 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.707 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:39.707 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.707 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.707 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:39.708 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.708 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.708 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.708 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.708 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.708 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.708 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.708 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.708 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.275 00:17:40.275 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.275 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.275 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.532 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.532 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.532 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.532 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.532 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.532 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.532 { 00:17:40.532 "cntlid": 67, 00:17:40.532 "qid": 0, 00:17:40.532 "state": "enabled", 00:17:40.532 "thread": "nvmf_tgt_poll_group_000", 00:17:40.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:40.532 "listen_address": { 00:17:40.532 "trtype": "TCP", 00:17:40.532 "adrfam": "IPv4", 00:17:40.532 "traddr": "10.0.0.2", 00:17:40.532 "trsvcid": "4420" 00:17:40.532 }, 00:17:40.532 "peer_address": { 00:17:40.532 "trtype": "TCP", 00:17:40.532 "adrfam": "IPv4", 00:17:40.532 "traddr": "10.0.0.1", 00:17:40.532 "trsvcid": "41324" 00:17:40.532 }, 00:17:40.532 "auth": { 00:17:40.532 "state": "completed", 00:17:40.532 "digest": "sha384", 00:17:40.532 "dhgroup": "ffdhe3072" 00:17:40.532 } 00:17:40.532 } 00:17:40.532 ]' 00:17:40.532 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.533 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.533 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.533 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:40.533 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.533 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.533 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.533 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.791 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret 
DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:17:40.791 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:17:41.727 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.727 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:41.727 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.727 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.727 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.727 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.727 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:41.727 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:42.291 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:42.291 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.291 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.291 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:42.291 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:42.291 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.291 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.291 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.291 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.291 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.291 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.291 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.291 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.548 00:17:42.548 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.548 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.548 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.805 { 00:17:42.805 "cntlid": 69, 00:17:42.805 "qid": 0, 00:17:42.805 "state": "enabled", 00:17:42.805 "thread": "nvmf_tgt_poll_group_000", 00:17:42.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:42.805 "listen_address": { 00:17:42.805 "trtype": "TCP", 00:17:42.805 "adrfam": "IPv4", 00:17:42.805 "traddr": "10.0.0.2", 00:17:42.805 "trsvcid": "4420" 00:17:42.805 }, 00:17:42.805 "peer_address": { 00:17:42.805 "trtype": "TCP", 00:17:42.805 "adrfam": "IPv4", 00:17:42.805 "traddr": "10.0.0.1", 00:17:42.805 "trsvcid": "41368" 00:17:42.805 }, 00:17:42.805 "auth": { 00:17:42.805 "state": "completed", 00:17:42.805 "digest": "sha384", 00:17:42.805 "dhgroup": "ffdhe3072" 00:17:42.805 } 00:17:42.805 } 00:17:42.805 ]' 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.805 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:43.373 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:17:43.373 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
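Alongside the host-RPC attach, every key is also exercised once through nvme-cli, handing the DHHC-1 secret blobs to the kernel initiator directly and then unwinding the host registration on the target. A sketch of that leg under the same assumptions; HOSTNQN and HOSTID stand for the uuid NQN and host ID used throughout this run, and the DHHC-1 strings below are placeholders, not the generated keys printed in the trace:

  # nvme-cli leg of one iteration (sketch; secrets are placeholders)
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
      --dhchap-secret "DHHC-1:01:<host-key-base64>:" \
      --dhchap-ctrl-secret "DHHC-1:02:<ctrl-key-base64>:"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # drop the host from the subsystem before the next key is configured
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"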
00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.308 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.877 00:17:44.877 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.877 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.877 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.136 { 00:17:45.136 "cntlid": 71, 00:17:45.136 "qid": 0, 00:17:45.136 "state": "enabled", 00:17:45.136 "thread": "nvmf_tgt_poll_group_000", 00:17:45.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:45.136 "listen_address": { 00:17:45.136 "trtype": "TCP", 00:17:45.136 "adrfam": "IPv4", 00:17:45.136 "traddr": "10.0.0.2", 00:17:45.136 "trsvcid": "4420" 00:17:45.136 }, 00:17:45.136 "peer_address": { 00:17:45.136 "trtype": "TCP", 00:17:45.136 "adrfam": "IPv4", 00:17:45.136 "traddr": "10.0.0.1", 00:17:45.136 "trsvcid": "41400" 00:17:45.136 }, 00:17:45.136 "auth": { 00:17:45.136 "state": "completed", 00:17:45.136 "digest": "sha384", 00:17:45.136 "dhgroup": "ffdhe3072" 00:17:45.136 } 00:17:45.136 } 00:17:45.136 ]' 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.136 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.394 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:17:45.394 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:17:46.330 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.330 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:46.330 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.330 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.330 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.330 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.330 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.330 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:46.330 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:46.587 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:46.587 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.587 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:46.587 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:46.587 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:46.587 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.587 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.587 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.587 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.587 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
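The for-loop markers in the trace (target/auth.sh@119, @120, @121/@123) give the overall shape of this stretch of the suite: for each dhgroup and each key index, the host-side options are re-pinned to a single sha384/dhgroup pair and connect_authenticate performs the attach, verification, detach, and nvme-cli steps sketched above. Condensed, with dhgroups and keys standing for the arrays the suite populated earlier in the run:

  # outer sweep, as traced at target/auth.sh@119-123 (sketch)
  for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ... in this run
      for keyid in "${!keys[@]}"; do     # key indices 0..3 here
          scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
              --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done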
00:17:46.587 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.588 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.588 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.152 00:17:47.152 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.152 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.152 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.410 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.410 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.410 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.410 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.410 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.410 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.410 { 00:17:47.410 "cntlid": 73, 00:17:47.410 "qid": 0, 00:17:47.410 "state": "enabled", 00:17:47.410 "thread": "nvmf_tgt_poll_group_000", 00:17:47.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:47.410 "listen_address": { 00:17:47.410 "trtype": "TCP", 00:17:47.410 "adrfam": "IPv4", 00:17:47.410 "traddr": "10.0.0.2", 00:17:47.410 "trsvcid": "4420" 00:17:47.410 }, 00:17:47.410 "peer_address": { 00:17:47.410 "trtype": "TCP", 00:17:47.410 "adrfam": "IPv4", 00:17:47.410 "traddr": "10.0.0.1", 00:17:47.410 "trsvcid": "41440" 00:17:47.410 }, 00:17:47.410 "auth": { 00:17:47.410 "state": "completed", 00:17:47.410 "digest": "sha384", 00:17:47.410 "dhgroup": "ffdhe4096" 00:17:47.410 } 00:17:47.410 } 00:17:47.410 ]' 00:17:47.410 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.410 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.410 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.410 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.410 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.410 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.410 
06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.410 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.668 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:17:47.668 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:17:48.605 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.605 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:48.605 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.605 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.605 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.605 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.605 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.605 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.864 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:48.864 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.864 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:48.864 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:48.864 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:48.864 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.864 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.864 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.864 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.864 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.864 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.864 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.864 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.430 00:17:49.430 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.430 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.430 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.689 { 00:17:49.689 "cntlid": 75, 00:17:49.689 "qid": 0, 00:17:49.689 "state": "enabled", 00:17:49.689 "thread": "nvmf_tgt_poll_group_000", 00:17:49.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:49.689 "listen_address": { 00:17:49.689 "trtype": "TCP", 00:17:49.689 "adrfam": "IPv4", 00:17:49.689 "traddr": "10.0.0.2", 00:17:49.689 "trsvcid": "4420" 00:17:49.689 }, 00:17:49.689 "peer_address": { 00:17:49.689 "trtype": "TCP", 00:17:49.689 "adrfam": "IPv4", 00:17:49.689 "traddr": "10.0.0.1", 00:17:49.689 "trsvcid": "41468" 00:17:49.689 }, 00:17:49.689 "auth": { 00:17:49.689 "state": "completed", 00:17:49.689 "digest": "sha384", 00:17:49.689 "dhgroup": "ffdhe4096" 00:17:49.689 } 00:17:49.689 } 00:17:49.689 ]' 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.689 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.946 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:17:49.946 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:17:50.884 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.884 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:50.884 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.884 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.884 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.884 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.884 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:50.884 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:51.142 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:51.142 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.142 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:51.142 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:51.142 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:51.142 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.142 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.142 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.142 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.142 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.142 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.142 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.142 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.765 00:17:51.765 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.765 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.765 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.765 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.765 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.765 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.765 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.765 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.765 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.765 { 00:17:51.765 "cntlid": 77, 00:17:51.765 "qid": 0, 00:17:51.765 "state": "enabled", 00:17:51.765 "thread": "nvmf_tgt_poll_group_000", 00:17:51.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:51.765 "listen_address": { 00:17:51.765 "trtype": "TCP", 00:17:51.765 "adrfam": "IPv4", 00:17:51.765 "traddr": "10.0.0.2", 00:17:51.765 "trsvcid": "4420" 00:17:51.765 }, 00:17:51.765 "peer_address": { 00:17:51.765 "trtype": "TCP", 00:17:51.765 "adrfam": "IPv4", 00:17:51.765 "traddr": "10.0.0.1", 00:17:51.765 "trsvcid": "44086" 00:17:51.765 }, 00:17:51.765 "auth": { 00:17:51.765 "state": "completed", 00:17:51.765 "digest": "sha384", 00:17:51.765 "dhgroup": "ffdhe4096" 00:17:51.765 } 00:17:51.765 } 00:17:51.765 ]' 00:17:51.765 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.765 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.765 06:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.023 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.023 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.023 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.023 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.023 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.281 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:17:52.281 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:17:53.213 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.213 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:53.213 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.213 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.213 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.213 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.213 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.213 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.470 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:53.470 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.470 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:53.470 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:53.470 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:53.470 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.470 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:53.470 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.470 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.470 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.470 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:53.470 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.470 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.727 00:17:53.727 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.727 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.727 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.984 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.984 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.984 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.984 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.984 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.984 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.984 { 00:17:53.984 "cntlid": 79, 00:17:53.984 "qid": 0, 00:17:53.984 "state": "enabled", 00:17:53.984 "thread": "nvmf_tgt_poll_group_000", 00:17:53.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:53.984 "listen_address": { 00:17:53.984 "trtype": "TCP", 00:17:53.984 "adrfam": "IPv4", 00:17:53.984 "traddr": "10.0.0.2", 00:17:53.984 "trsvcid": "4420" 00:17:53.984 }, 00:17:53.984 "peer_address": { 00:17:53.984 "trtype": "TCP", 00:17:53.984 "adrfam": "IPv4", 00:17:53.984 "traddr": "10.0.0.1", 00:17:53.984 "trsvcid": "44098" 00:17:53.984 }, 00:17:53.984 "auth": { 00:17:53.984 "state": "completed", 00:17:53.984 "digest": "sha384", 00:17:53.984 "dhgroup": "ffdhe4096" 00:17:53.984 } 00:17:53.984 } 00:17:53.984 ]' 00:17:53.984 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.240 06:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.240 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.240 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.240 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.240 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.240 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.240 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.498 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:17:54.498 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:17:55.437 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.437 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:55.437 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.437 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.437 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.437 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.437 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.437 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:55.437 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:55.694 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:55.694 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.694 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:55.695 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:55.695 06:21:45 
00:17:55.695 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:55.695 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:55.695 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:55.695 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:55.695 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:55.695 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:55.695 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:55.695 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:55.695 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:56.262
00:17:56.262 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:56.262 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:56.262 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:56.520 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:56.520 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:56.520 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:56.520 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:56.520 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:56.520 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:56.520 {
00:17:56.520 "cntlid": 81,
00:17:56.520 "qid": 0,
00:17:56.520 "state": "enabled",
00:17:56.520 "thread": "nvmf_tgt_poll_group_000",
00:17:56.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:17:56.520 "listen_address": {
00:17:56.520 "trtype": "TCP",
00:17:56.520 "adrfam": "IPv4",
00:17:56.520 "traddr": "10.0.0.2",
00:17:56.520 "trsvcid": "4420"
00:17:56.520 },
00:17:56.520 "peer_address": {
00:17:56.520 "trtype": "TCP",
00:17:56.520 "adrfam": "IPv4",
00:17:56.520 "traddr": "10.0.0.1",
00:17:56.520 "trsvcid": "44122"
00:17:56.520 },
00:17:56.520 "auth": {
00:17:56.520 "state": "completed",
00:17:56.520 "digest": "sha384",
00:17:56.520 "dhgroup": "ffdhe6144"
00:17:56.520 }
00:17:56.520 }
00:17:56.520 ]'
06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:56.521 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:56.521 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:56.521 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:56.521 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:56.521 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:56.521 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:56.521 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:57.089 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=:
00:17:57.089 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=:
00:17:58.023 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:58.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:58.023 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:17:58.023 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.023 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:58.023 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.023 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:58.023 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:58.023 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:58.282 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:17:58.282 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:58.282 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:58.282 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:58.282 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:58.282 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:58.282 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:58.282 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.282 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:58.282 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.282 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:58.282 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:58.282 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:58.852
00:17:58.852 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:58.852 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:58.852 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:59.110 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:59.110 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:59.110 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.110 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:59.110 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.110 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:59.110 {
00:17:59.110 "cntlid": 83,
00:17:59.110 "qid": 0,
00:17:59.110 "state": "enabled",
00:17:59.110 "thread": "nvmf_tgt_poll_group_000",
00:17:59.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:17:59.110 "listen_address": {
00:17:59.110 "trtype": "TCP",
00:17:59.110 "adrfam": "IPv4",
00:17:59.110 "traddr": "10.0.0.2",
00:17:59.110 "trsvcid": "4420"
00:17:59.110 },
00:17:59.110 "peer_address": {
00:17:59.110 "trtype": "TCP",
00:17:59.110 "adrfam": "IPv4",
00:17:59.110 "traddr": "10.0.0.1",
00:17:59.110 "trsvcid": "44154"
00:17:59.110 },
00:17:59.110 "auth": {
00:17:59.110 "state": "completed",
00:17:59.110 "digest": "sha384",
00:17:59.110 "dhgroup": "ffdhe6144"
00:17:59.110 }
00:17:59.110 }
00:17:59.110 ]'
06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:59.110 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:59.110 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:59.110 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:59.110 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:59.110 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:59.110 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:59.110 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:59.680 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==:
00:17:59.680 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==:
00:18:00.617 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:00.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:00.617 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:18:00.617 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.617 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:00.617 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.617 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:00.617 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:00.617 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:00.911 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:18:00.911 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:00.911 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:00.911 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:00.911 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:00.911 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:00.911 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:00.911 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.911 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:00.911 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.911 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:00.911 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:00.911 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:01.201
00:18:01.201 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:01.201 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:01.201 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:01.460 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:01.460 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:01.460 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:01.460 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:01.460 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:01.460 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:01.460 {
00:18:01.460 "cntlid": 85,
00:18:01.460 "qid": 0,
00:18:01.460 "state": "enabled",
00:18:01.460 "thread": "nvmf_tgt_poll_group_000",
00:18:01.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:18:01.460 "listen_address": {
00:18:01.460 "trtype": "TCP",
00:18:01.460 "adrfam": "IPv4",
00:18:01.460 "traddr": "10.0.0.2",
00:18:01.460 "trsvcid": "4420"
00:18:01.460 },
00:18:01.460 "peer_address": {
00:18:01.460 "trtype": "TCP",
00:18:01.460 "adrfam": "IPv4",
00:18:01.460 "traddr": "10.0.0.1",
00:18:01.460 "trsvcid": "58660"
00:18:01.460 },
00:18:01.460 "auth": {
00:18:01.460 "state": "completed",
00:18:01.460 "digest": "sha384",
00:18:01.460 "dhgroup": "ffdhe6144"
00:18:01.460 }
00:18:01.460 }
00:18:01.460 ]'
00:18:01.718 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:01.718 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:01.718 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:01.718 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:01.718 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:01.718 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:01.718 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:01.718 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:01.976 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo:
00:18:01.976 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo:
00:18:02.910 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:02.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:02.910 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:18:02.910 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.910 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:02.910 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.910 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:02.910 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:02.910 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:03.168 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:18:03.168 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:03.168 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:03.168 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:03.168 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:03.168 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:03.168 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:18:03.168 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:03.168 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:03.168 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:03.168 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:03.168 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:03.168 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:03.733
00:18:03.733 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:03.733 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:03.733 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:03.991 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:03.991 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:03.991 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:03.991 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:04.247 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.247 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:04.247 {
00:18:04.247 "cntlid": 87,
00:18:04.247 "qid": 0,
00:18:04.247 "state": "enabled",
00:18:04.247 "thread": "nvmf_tgt_poll_group_000",
00:18:04.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:18:04.247 "listen_address": {
00:18:04.247 "trtype": "TCP",
00:18:04.247 "adrfam": "IPv4",
00:18:04.247 "traddr": "10.0.0.2",
00:18:04.247 "trsvcid": "4420"
00:18:04.247 },
00:18:04.247 "peer_address": {
00:18:04.247 "trtype": "TCP",
00:18:04.247 "adrfam": "IPv4",
00:18:04.247 "traddr": "10.0.0.1",
00:18:04.247 "trsvcid": "58680"
00:18:04.247 },
00:18:04.247 "auth": {
00:18:04.247 "state": "completed",
00:18:04.247 "digest": "sha384",
00:18:04.247 "dhgroup": "ffdhe6144"
00:18:04.247 }
00:18:04.247 }
00:18:04.247 ]'
00:18:04.247 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:04.248 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:04.248 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:04.248 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:04.248 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:04.248 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:04.248 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:04.248 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:04.504 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=:
00:18:04.504 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=:
00:18:05.434 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:05.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:05.434 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:18:05.434 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.434 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:05.434 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.434 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:05.434 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:05.434 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:05.434 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:05.691 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:18:05.691 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:05.691 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:05.691 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:05.691 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:05.691 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:05.691 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:05.691 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.691 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:05.691 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.691 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:05.691 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:05.691 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:06.622
00:18:06.622 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:06.622 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:06.622 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:06.880 {
00:18:06.880 "cntlid": 89,
00:18:06.880 "qid": 0,
00:18:06.880 "state": "enabled",
00:18:06.880 "thread": "nvmf_tgt_poll_group_000",
00:18:06.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:18:06.880 "listen_address": {
00:18:06.880 "trtype": "TCP",
00:18:06.880 "adrfam": "IPv4",
00:18:06.880 "traddr": "10.0.0.2",
00:18:06.880 "trsvcid": "4420"
00:18:06.880 },
00:18:06.880 "peer_address": {
00:18:06.880 "trtype": "TCP",
00:18:06.880 "adrfam": "IPv4",
00:18:06.880 "traddr": "10.0.0.1",
00:18:06.880 "trsvcid": "58714"
00:18:06.880 },
00:18:06.880 "auth": {
00:18:06.880 "state": "completed",
00:18:06.880 "digest": "sha384",
00:18:06.880 "dhgroup": "ffdhe8192"
00:18:06.880 }
00:18:06.880 }
00:18:06.880 ]'
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:06.880 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:07.138 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=:
00:18:07.138 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=:
00:18:08.073 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:08.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:08.073 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
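Both ends present base64 DHHC-1 secrets like the pair above (a DHHC-1:00: host key with a DHHC-1:03: controller key). If you need to mint such secrets yourself, recent nvme-cli ships a generator; the flag names below are from its gen-dhchap-key subcommand and are worth double-checking against your installed version:

# Generate a 64-byte secret, transformed with SHA-512 (hmac id 3), bound to a host NQN:
nvme gen-dhchap-key --key-length 64 --hmac 3 \
  --nqn nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
# Prints a string of the form DHHC-1:03:<base64>:, suitable for --dhchap-secret.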
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.073 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.073 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.073 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.330 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:08.330 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.330 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:08.330 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:08.330 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:08.330 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.330 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.330 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.330 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.330 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.330 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.330 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.330 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.267 00:18:09.267 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.267 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.267 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.526 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.526 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:09.526 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.526 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.526 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.526 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.526 { 00:18:09.526 "cntlid": 91, 00:18:09.526 "qid": 0, 00:18:09.526 "state": "enabled", 00:18:09.526 "thread": "nvmf_tgt_poll_group_000", 00:18:09.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:09.526 "listen_address": { 00:18:09.526 "trtype": "TCP", 00:18:09.526 "adrfam": "IPv4", 00:18:09.526 "traddr": "10.0.0.2", 00:18:09.526 "trsvcid": "4420" 00:18:09.526 }, 00:18:09.526 "peer_address": { 00:18:09.526 "trtype": "TCP", 00:18:09.526 "adrfam": "IPv4", 00:18:09.526 "traddr": "10.0.0.1", 00:18:09.526 "trsvcid": "58750" 00:18:09.526 }, 00:18:09.526 "auth": { 00:18:09.526 "state": "completed", 00:18:09.526 "digest": "sha384", 00:18:09.526 "dhgroup": "ffdhe8192" 00:18:09.526 } 00:18:09.526 } 00:18:09.526 ]' 00:18:09.526 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.526 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.526 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.526 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.526 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.783 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.783 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.783 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.043 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:18:10.043 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:18:10.978 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.978 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:10.978 06:22:00 
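A note on the odd-looking comparisons such as [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] throughout this trace: nothing is corrupted. The right-hand side of == inside [[ ]] is a glob pattern, and auth.sh quotes it to force a literal match; bash's xtrace renders that quoting by backslash-escaping every character. Reproducible in any bash:

set -x
dhgroup=ffdhe8192
[[ $dhgroup == "$dhgroup" ]]   # xtrace prints: [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]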
00:18:10.978 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.978 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.978 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.978 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:10.978 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:10.979 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:11.236 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:18:11.236 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:11.236 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:11.236 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:11.236 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:11.236 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:11.236 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:11.237 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.237 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.237 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.237 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:11.237 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:11.237 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:12.173
00:18:12.173 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:12.173 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:12.173 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:12.173 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:12.173 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:12.173 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.173 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.173 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.173 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:12.173 {
00:18:12.173 "cntlid": 93,
00:18:12.173 "qid": 0,
00:18:12.173 "state": "enabled",
00:18:12.173 "thread": "nvmf_tgt_poll_group_000",
00:18:12.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:18:12.173 "listen_address": {
00:18:12.173 "trtype": "TCP",
00:18:12.173 "adrfam": "IPv4",
00:18:12.173 "traddr": "10.0.0.2",
00:18:12.173 "trsvcid": "4420"
00:18:12.173 },
00:18:12.173 "peer_address": {
00:18:12.173 "trtype": "TCP",
00:18:12.173 "adrfam": "IPv4",
00:18:12.173 "traddr": "10.0.0.1",
00:18:12.173 "trsvcid": "58908"
00:18:12.173 },
00:18:12.173 "auth": {
00:18:12.173 "state": "completed",
00:18:12.173 "digest": "sha384",
00:18:12.173 "dhgroup": "ffdhe8192"
00:18:12.173 }
00:18:12.173 }
00:18:12.173 ]'
00:18:12.173 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:12.173 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:12.173 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:12.432 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:12.432 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:12.432 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:12.432 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:12.432 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:12.690 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo:
00:18:12.690 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo:
00:18:13.630 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:13.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:13.630 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:18:13.630 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.630 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:13.630 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.630 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:13.630 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:13.630 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:13.888 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:18:13.888 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:13.888 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:13.888 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:13.888 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:13.888 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:13.888 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:18:13.888 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.888 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:13.888 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.888 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:13.888 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:13.888 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:14.823
00:18:14.823 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:14.823 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:14.823 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:15.082 {
00:18:15.082 "cntlid": 95,
00:18:15.082 "qid": 0,
00:18:15.082 "state": "enabled",
00:18:15.082 "thread": "nvmf_tgt_poll_group_000",
00:18:15.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:18:15.082 "listen_address": {
00:18:15.082 "trtype": "TCP",
00:18:15.082 "adrfam": "IPv4",
00:18:15.082 "traddr": "10.0.0.2",
00:18:15.082 "trsvcid": "4420"
00:18:15.082 },
00:18:15.082 "peer_address": {
00:18:15.082 "trtype": "TCP",
00:18:15.082 "adrfam": "IPv4",
00:18:15.082 "traddr": "10.0.0.1",
00:18:15.082 "trsvcid": "58952"
00:18:15.082 },
00:18:15.082 "auth": {
00:18:15.082 "state": "completed",
00:18:15.082 "digest": "sha384",
00:18:15.082 "dhgroup": "ffdhe8192"
00:18:15.082 }
00:18:15.082 }
00:18:15.082 ]'
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:15.082 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:15.340 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=:
00:18:15.340 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=:
00:18:16.286 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:16.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:16.286 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:18:16.286 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:16.286 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:16.286 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:16.286 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:18:16.286 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:16.286 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:16.286 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:18:16.286 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:18:16.853 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:18:16.853 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:16.853 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:16.853 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:16.854 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:16.854 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:16.854 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:16.854 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:16.854 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:16.854 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:16.854 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:16.854 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:16.854 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:17.112
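At this point the outer digest loop advances and the dhgroup list restarts: --dhchap-digests sha512 --dhchap-dhgroups null means DH-HMAC-CHAP still authenticates (HMAC over the challenge with SHA-512) but performs no Diffie-Hellman exchange, so no ephemeral shared secret is mixed in. The overall sweep, reconstructed from the @118-@121 loop frames in the trace (the exact array contents are an assumption; this section of the log only shows sha384/sha512 and null/ffdhe4096/ffdhe6144/ffdhe8192):

for digest in "${digests[@]}"; do          # e.g. sha256 sha384 sha512
  for dhgroup in "${dhgroups[@]}"; do      # e.g. null ffdhe2048 ... ffdhe8192
    for keyid in "${!keys[@]}"; do         # key0..key3, with ckeyN optional
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done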
06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.112 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.112 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.369 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.369 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.369 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.369 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.369 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.369 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.369 { 00:18:17.369 "cntlid": 97, 00:18:17.370 "qid": 0, 00:18:17.370 "state": "enabled", 00:18:17.370 "thread": "nvmf_tgt_poll_group_000", 00:18:17.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:17.370 "listen_address": { 00:18:17.370 "trtype": "TCP", 00:18:17.370 "adrfam": "IPv4", 00:18:17.370 "traddr": "10.0.0.2", 00:18:17.370 "trsvcid": "4420" 00:18:17.370 }, 00:18:17.370 "peer_address": { 00:18:17.370 "trtype": "TCP", 00:18:17.370 "adrfam": "IPv4", 00:18:17.370 "traddr": "10.0.0.1", 00:18:17.370 "trsvcid": "58984" 00:18:17.370 }, 00:18:17.370 "auth": { 00:18:17.370 "state": "completed", 00:18:17.370 "digest": "sha512", 00:18:17.370 "dhgroup": "null" 00:18:17.370 } 00:18:17.370 } 00:18:17.370 ]' 00:18:17.370 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.370 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.370 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.370 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:17.370 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.370 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.370 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.370 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.939 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:18:17.939 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:18:18.873 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.873 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:18.873 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.873 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.874 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.874 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.874 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:18.874 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:19.132 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:19.132 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.132 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.132 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:19.132 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:19.132 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.132 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.132 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.132 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.132 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.132 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.132 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.132 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.389 00:18:19.389 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.389 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.389 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.646 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.646 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.646 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.646 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.646 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.647 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.647 { 00:18:19.647 "cntlid": 99, 00:18:19.647 "qid": 0, 00:18:19.647 "state": "enabled", 00:18:19.647 "thread": "nvmf_tgt_poll_group_000", 00:18:19.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:19.647 "listen_address": { 00:18:19.647 "trtype": "TCP", 00:18:19.647 "adrfam": "IPv4", 00:18:19.647 "traddr": "10.0.0.2", 00:18:19.647 "trsvcid": "4420" 00:18:19.647 }, 00:18:19.647 "peer_address": { 00:18:19.647 "trtype": "TCP", 00:18:19.647 "adrfam": "IPv4", 00:18:19.647 "traddr": "10.0.0.1", 00:18:19.647 "trsvcid": "59014" 00:18:19.647 }, 00:18:19.647 "auth": { 00:18:19.647 "state": "completed", 00:18:19.647 "digest": "sha512", 00:18:19.647 "dhgroup": "null" 00:18:19.647 } 00:18:19.647 } 00:18:19.647 ]' 00:18:19.647 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.647 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.647 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.905 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:19.905 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.905 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.905 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.905 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.180 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:18:20.180 06:22:10 
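[Annotation] Besides the SPDK host, the test exercises the kernel initiator: nvme_connect wraps nvme-cli with the same DHHC-1 secrets. In the DH-HMAC-CHAP secret representation the two-digit field after DHHC-1 encodes the hash transform applied to the stored secret (00 = untransformed, 01/02/03 = SHA-256/384/512), which matches key0 through key3 in this log carrying prefixes 00 through 03. A sketch of the equivalent manual invocation, with the literal secrets elided and $HOSTNQN/$HOSTID standing in for the uuid values above:

  # Connect the kernel NVMe/TCP initiator with bidirectional CHAP secrets
  # (assumes an nvme-cli and kernel built with DH-HMAC-CHAP support).
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
      --dhchap-secret 'DHHC-1:01:<host secret>' \
      --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>'
  # ...then tear the session down, as the log does at @82:
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0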
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:18:21.113 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.113 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:21.113 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.113 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.113 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.113 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.113 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.113 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.371 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:21.371 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.371 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.371 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:21.371 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:21.371 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.371 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.371 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.371 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.371 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.371 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.371 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:21.371 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.629 00:18:21.886 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.886 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.886 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.145 { 00:18:22.145 "cntlid": 101, 00:18:22.145 "qid": 0, 00:18:22.145 "state": "enabled", 00:18:22.145 "thread": "nvmf_tgt_poll_group_000", 00:18:22.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:22.145 "listen_address": { 00:18:22.145 "trtype": "TCP", 00:18:22.145 "adrfam": "IPv4", 00:18:22.145 "traddr": "10.0.0.2", 00:18:22.145 "trsvcid": "4420" 00:18:22.145 }, 00:18:22.145 "peer_address": { 00:18:22.145 "trtype": "TCP", 00:18:22.145 "adrfam": "IPv4", 00:18:22.145 "traddr": "10.0.0.1", 00:18:22.145 "trsvcid": "39218" 00:18:22.145 }, 00:18:22.145 "auth": { 00:18:22.145 "state": "completed", 00:18:22.145 "digest": "sha512", 00:18:22.145 "dhgroup": "null" 00:18:22.145 } 00:18:22.145 } 00:18:22.145 ]' 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.145 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.404 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:18:22.404 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:18:23.341 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.341 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:23.341 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.341 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.341 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.341 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.341 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:23.341 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:23.598 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:23.598 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.598 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.598 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:23.598 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:23.598 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.598 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:23.598 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.598 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.598 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.598 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.598 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.599 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.857 00:18:23.857 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.857 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.857 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.431 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.431 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.431 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.432 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.432 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.432 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.432 { 00:18:24.432 "cntlid": 103, 00:18:24.432 "qid": 0, 00:18:24.432 "state": "enabled", 00:18:24.432 "thread": "nvmf_tgt_poll_group_000", 00:18:24.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:24.432 "listen_address": { 00:18:24.432 "trtype": "TCP", 00:18:24.432 "adrfam": "IPv4", 00:18:24.432 "traddr": "10.0.0.2", 00:18:24.432 "trsvcid": "4420" 00:18:24.432 }, 00:18:24.432 "peer_address": { 00:18:24.432 "trtype": "TCP", 00:18:24.432 "adrfam": "IPv4", 00:18:24.432 "traddr": "10.0.0.1", 00:18:24.432 "trsvcid": "39248" 00:18:24.432 }, 00:18:24.432 "auth": { 00:18:24.432 "state": "completed", 00:18:24.432 "digest": "sha512", 00:18:24.432 "dhgroup": "null" 00:18:24.432 } 00:18:24.432 } 00:18:24.432 ]' 00:18:24.432 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.432 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.432 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.432 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:24.432 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.432 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.432 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.432 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.690 06:22:14 
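[Annotation] The key3 pass that just finished differs in one detail worth noticing: the @68 expansion ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) yields an empty array when no controller key is defined for that index, so the @70 add_host and the attach are issued with --dhchap-key key3 alone and authentication is unidirectional. A sketch of the idiom; $3 is connect_authenticate's key-id argument per the trace, while the exact consuming line in auth.sh is an assumption:

  # Build an optional argument list: empty when ckeys[$3] is unset or empty.
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  # The array splices cleanly into the RPC call, adding the controller key
  # only when one exists for this key id (hypothetical consuming line).
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key "key$3" "${ckey[@]}"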
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:18:24.690 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:18:25.624 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.624 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:25.624 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.624 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.624 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.624 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.624 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.624 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:25.624 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:25.882 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:25.882 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.882 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:25.882 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:25.882 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:25.882 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.882 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.882 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.882 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.882 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.882 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:18:25.882 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.882 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.140 00:18:26.140 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.140 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.140 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.398 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.398 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.398 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.398 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.398 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.398 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.398 { 00:18:26.398 "cntlid": 105, 00:18:26.398 "qid": 0, 00:18:26.398 "state": "enabled", 00:18:26.398 "thread": "nvmf_tgt_poll_group_000", 00:18:26.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:26.398 "listen_address": { 00:18:26.398 "trtype": "TCP", 00:18:26.398 "adrfam": "IPv4", 00:18:26.398 "traddr": "10.0.0.2", 00:18:26.398 "trsvcid": "4420" 00:18:26.398 }, 00:18:26.398 "peer_address": { 00:18:26.398 "trtype": "TCP", 00:18:26.398 "adrfam": "IPv4", 00:18:26.398 "traddr": "10.0.0.1", 00:18:26.398 "trsvcid": "39278" 00:18:26.398 }, 00:18:26.398 "auth": { 00:18:26.398 "state": "completed", 00:18:26.398 "digest": "sha512", 00:18:26.398 "dhgroup": "ffdhe2048" 00:18:26.398 } 00:18:26.398 } 00:18:26.398 ]' 00:18:26.398 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.398 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.398 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.398 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:26.398 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.657 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.657 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.657 06:22:16 
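[Annotation] Stepping back, the @118-@121 trace tags show the shape of the driver: three nested loops over digests, DH groups, and key ids, re-running set_options and connect_authenticate for every combination (sha384/ffdhe8192 finished above; this stretch sweeps sha512 across null, ffdhe2048, and onward). Reconstructed from those tags, approximately:

  # Inferred structure of the sweep in target/auth.sh (tags @118-@123).
  for digest in "${digests[@]}"; do            # @118
    for dhgroup in "${dhgroups[@]}"; do        # @119
      for keyid in "${!keys[@]}"; do           # @120
        hostrpc bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # @121
        connect_authenticate "$digest" "$dhgroup" "$keyid"            # @123
      done
    done
  done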
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.917 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:18:26.917 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:18:27.854 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.854 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:27.854 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.854 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.854 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.854 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.854 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:27.854 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:28.112 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:28.112 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.112 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.112 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:28.112 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:28.112 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.112 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.112 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.112 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:28.112 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.112 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.112 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.112 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.371 00:18:28.371 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.371 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.371 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.629 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.629 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.629 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.629 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.629 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.629 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.629 { 00:18:28.629 "cntlid": 107, 00:18:28.629 "qid": 0, 00:18:28.629 "state": "enabled", 00:18:28.629 "thread": "nvmf_tgt_poll_group_000", 00:18:28.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:28.629 "listen_address": { 00:18:28.629 "trtype": "TCP", 00:18:28.629 "adrfam": "IPv4", 00:18:28.629 "traddr": "10.0.0.2", 00:18:28.629 "trsvcid": "4420" 00:18:28.629 }, 00:18:28.629 "peer_address": { 00:18:28.629 "trtype": "TCP", 00:18:28.629 "adrfam": "IPv4", 00:18:28.629 "traddr": "10.0.0.1", 00:18:28.629 "trsvcid": "39314" 00:18:28.629 }, 00:18:28.629 "auth": { 00:18:28.629 "state": "completed", 00:18:28.629 "digest": "sha512", 00:18:28.629 "dhgroup": "ffdhe2048" 00:18:28.629 } 00:18:28.629 } 00:18:28.629 ]' 00:18:28.629 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.629 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.629 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.629 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.629 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:28.888 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.888 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.888 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.146 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:18:29.146 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:18:30.079 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.080 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:30.080 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.080 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.080 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.080 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.080 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:30.080 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:30.337 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:30.337 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.337 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.337 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:30.337 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:30.337 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.337 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:30.337 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.337 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.337 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.337 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.337 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.337 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.595 00:18:30.596 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.596 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.596 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.874 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.874 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.874 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.874 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.874 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.874 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.874 { 00:18:30.874 "cntlid": 109, 00:18:30.874 "qid": 0, 00:18:30.874 "state": "enabled", 00:18:30.874 "thread": "nvmf_tgt_poll_group_000", 00:18:30.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:30.874 "listen_address": { 00:18:30.874 "trtype": "TCP", 00:18:30.874 "adrfam": "IPv4", 00:18:30.874 "traddr": "10.0.0.2", 00:18:30.874 "trsvcid": "4420" 00:18:30.874 }, 00:18:30.874 "peer_address": { 00:18:30.874 "trtype": "TCP", 00:18:30.874 "adrfam": "IPv4", 00:18:30.874 "traddr": "10.0.0.1", 00:18:30.874 "trsvcid": "48390" 00:18:30.874 }, 00:18:30.874 "auth": { 00:18:30.874 "state": "completed", 00:18:30.874 "digest": "sha512", 00:18:30.874 "dhgroup": "ffdhe2048" 00:18:30.874 } 00:18:30.874 } 00:18:30.874 ]' 00:18:30.874 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.874 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.874 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.874 06:22:20 
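[Annotation] Every pass also cleans up symmetrically before the next one: detach the host-side controller (or nvme disconnect on the kernel path) and remove the host entry from the subsystem, so each digest/dhgroup combination starts from a clean allow list. The teardown pair as it appears throughout this log, with $HOSTNQN as above:

  # Host side: drop the bdev controller created by the attach.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # Target side: revoke the host so stale DH-HMAC-CHAP keys cannot linger.
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"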
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.874 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.182 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.182 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.182 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.444 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:18:31.444 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:18:32.441 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.441 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:32.441 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.441 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.441 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.442 06:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:32.442 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.008 00:18:33.008 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.008 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.008 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.266 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.266 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.266 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.266 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.266 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.266 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.266 { 00:18:33.266 "cntlid": 111, 00:18:33.266 "qid": 0, 00:18:33.266 "state": "enabled", 00:18:33.266 "thread": "nvmf_tgt_poll_group_000", 00:18:33.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:33.266 "listen_address": { 00:18:33.266 "trtype": "TCP", 00:18:33.266 "adrfam": "IPv4", 00:18:33.266 "traddr": "10.0.0.2", 00:18:33.266 "trsvcid": "4420" 00:18:33.266 }, 00:18:33.266 "peer_address": { 00:18:33.266 "trtype": "TCP", 00:18:33.266 "adrfam": "IPv4", 00:18:33.266 "traddr": "10.0.0.1", 00:18:33.266 "trsvcid": "48396" 00:18:33.266 }, 00:18:33.266 "auth": { 00:18:33.266 "state": "completed", 00:18:33.266 "digest": "sha512", 00:18:33.266 "dhgroup": "ffdhe2048" 00:18:33.266 } 00:18:33.266 } 00:18:33.266 ]' 00:18:33.266 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.266 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.266 
06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.266 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.266 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.266 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.266 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.266 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.525 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:18:33.525 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:18:34.456 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.457 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:34.457 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.457 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.457 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.457 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.457 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.457 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:34.457 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:34.714 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:34.714 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.714 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.714 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:34.714 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:34.714 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.714 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.714 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.714 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.714 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.714 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.714 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.714 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.279 00:18:35.279 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.279 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.279 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.536 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.536 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.536 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.537 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.537 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.537 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.537 { 00:18:35.537 "cntlid": 113, 00:18:35.537 "qid": 0, 00:18:35.537 "state": "enabled", 00:18:35.537 "thread": "nvmf_tgt_poll_group_000", 00:18:35.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:35.537 "listen_address": { 00:18:35.537 "trtype": "TCP", 00:18:35.537 "adrfam": "IPv4", 00:18:35.537 "traddr": "10.0.0.2", 00:18:35.537 "trsvcid": "4420" 00:18:35.537 }, 00:18:35.537 "peer_address": { 00:18:35.537 "trtype": "TCP", 00:18:35.537 "adrfam": "IPv4", 00:18:35.537 "traddr": "10.0.0.1", 00:18:35.537 "trsvcid": "48416" 00:18:35.537 }, 00:18:35.537 "auth": { 00:18:35.537 "state": "completed", 00:18:35.537 "digest": "sha512", 00:18:35.537 "dhgroup": "ffdhe3072" 00:18:35.537 } 00:18:35.537 } 00:18:35.537 ]' 00:18:35.537 06:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.537 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.537 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.537 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:35.537 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.537 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.537 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.537 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.794 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:18:35.794 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:18:36.735 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.735 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:36.735 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.735 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.735 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.735 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.735 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:36.735 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:36.992 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:36.992 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.992 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:36.992 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:36.992 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:36.992 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.992 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.992 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.992 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.992 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.992 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.992 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.992 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.559 00:18:37.559 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.559 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.559 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.818 { 00:18:37.818 "cntlid": 115, 00:18:37.818 "qid": 0, 00:18:37.818 "state": "enabled", 00:18:37.818 "thread": "nvmf_tgt_poll_group_000", 00:18:37.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:37.818 "listen_address": { 00:18:37.818 "trtype": "TCP", 00:18:37.818 "adrfam": "IPv4", 00:18:37.818 "traddr": "10.0.0.2", 00:18:37.818 "trsvcid": "4420" 00:18:37.818 }, 00:18:37.818 "peer_address": { 00:18:37.818 "trtype": "TCP", 00:18:37.818 "adrfam": "IPv4", 
00:18:37.818 "traddr": "10.0.0.1", 00:18:37.818 "trsvcid": "48438" 00:18:37.818 }, 00:18:37.818 "auth": { 00:18:37.818 "state": "completed", 00:18:37.818 "digest": "sha512", 00:18:37.818 "dhgroup": "ffdhe3072" 00:18:37.818 } 00:18:37.818 } 00:18:37.818 ]' 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.818 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.077 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:18:38.077 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:18:39.012 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.012 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:39.012 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.012 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.012 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.012 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.012 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.012 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.270 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:39.270 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.270 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.270 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:39.270 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:39.270 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.270 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.270 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.270 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.270 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.270 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.270 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.270 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.836 00:18:39.836 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.836 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.836 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.094 { 00:18:40.094 "cntlid": 117, 00:18:40.094 "qid": 0, 00:18:40.094 "state": "enabled", 00:18:40.094 "thread": "nvmf_tgt_poll_group_000", 00:18:40.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:40.094 "listen_address": { 00:18:40.094 "trtype": "TCP", 
00:18:40.094 "adrfam": "IPv4", 00:18:40.094 "traddr": "10.0.0.2", 00:18:40.094 "trsvcid": "4420" 00:18:40.094 }, 00:18:40.094 "peer_address": { 00:18:40.094 "trtype": "TCP", 00:18:40.094 "adrfam": "IPv4", 00:18:40.094 "traddr": "10.0.0.1", 00:18:40.094 "trsvcid": "48472" 00:18:40.094 }, 00:18:40.094 "auth": { 00:18:40.094 "state": "completed", 00:18:40.094 "digest": "sha512", 00:18:40.094 "dhgroup": "ffdhe3072" 00:18:40.094 } 00:18:40.094 } 00:18:40.094 ]' 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.094 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.353 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:18:40.353 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:18:41.292 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.292 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:41.292 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.292 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.292 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.292 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.292 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.292 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.549 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:41.549 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.549 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:41.550 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:41.550 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:41.550 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.550 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:41.550 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.550 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.808 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.808 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:41.808 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.808 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.065 00:18:42.065 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.065 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.065 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.323 { 00:18:42.323 "cntlid": 119, 00:18:42.323 "qid": 0, 00:18:42.323 "state": "enabled", 00:18:42.323 "thread": "nvmf_tgt_poll_group_000", 00:18:42.323 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:42.323 "listen_address": { 00:18:42.323 "trtype": "TCP", 00:18:42.323 "adrfam": "IPv4", 00:18:42.323 "traddr": "10.0.0.2", 00:18:42.323 "trsvcid": "4420" 00:18:42.323 }, 00:18:42.323 "peer_address": { 00:18:42.323 "trtype": "TCP", 00:18:42.323 "adrfam": "IPv4", 00:18:42.323 "traddr": "10.0.0.1", 00:18:42.323 "trsvcid": "39028" 00:18:42.323 }, 00:18:42.323 "auth": { 00:18:42.323 "state": "completed", 00:18:42.323 "digest": "sha512", 00:18:42.323 "dhgroup": "ffdhe3072" 00:18:42.323 } 00:18:42.323 } 00:18:42.323 ]' 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.323 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.891 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:18:42.891 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:43.827 06:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:43.827 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.828 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.828 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.828 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.828 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.828 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.828 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.828 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.394 00:18:44.394 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.394 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.394 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.653 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.653 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.653 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.653 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.653 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.653 06:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.653 { 00:18:44.653 "cntlid": 121, 00:18:44.653 "qid": 0, 00:18:44.653 "state": "enabled", 00:18:44.653 "thread": "nvmf_tgt_poll_group_000", 00:18:44.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:44.653 "listen_address": { 00:18:44.653 "trtype": "TCP", 00:18:44.653 "adrfam": "IPv4", 00:18:44.653 "traddr": "10.0.0.2", 00:18:44.653 "trsvcid": "4420" 00:18:44.653 }, 00:18:44.653 "peer_address": { 00:18:44.653 "trtype": "TCP", 00:18:44.653 "adrfam": "IPv4", 00:18:44.653 "traddr": "10.0.0.1", 00:18:44.653 "trsvcid": "39064" 00:18:44.653 }, 00:18:44.653 "auth": { 00:18:44.653 "state": "completed", 00:18:44.653 "digest": "sha512", 00:18:44.653 "dhgroup": "ffdhe4096" 00:18:44.653 } 00:18:44.653 } 00:18:44.653 ]' 00:18:44.653 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.653 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.653 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.653 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:44.653 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.653 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.653 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.653 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.912 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:18:44.912 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:18:45.845 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.845 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:45.845 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.845 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.845 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
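[editor's note] The qpair JSON dumps above are what the test asserts against: the target reports the negotiated auth parameters per queue pair, and the jq filters in the trace check each field. A sketch of that verification plus the kernel-initiator pass, assuming $RPC, $hostnqn, $hostid, $key and $ckey stand in for the literal values shown in the log:

    # Target-side check of the negotiated parameters (jq filters as in the trace).
    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Kernel-initiator pass with the same secrets in DHHC-1 form via nvme-cli.
    # Note the keyid-3 iterations in this log carry no controller key, so there
    # --dhchap-ctrl-secret is omitted and authentication is unidirectional.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0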
00:18:45.845 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.845 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.845 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.102 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:46.102 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.102 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:46.102 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:46.102 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:46.102 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.102 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.102 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.102 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.102 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.102 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.102 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.102 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.669 00:18:46.669 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.669 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.669 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.926 { 00:18:46.926 "cntlid": 123, 00:18:46.926 "qid": 0, 00:18:46.926 "state": "enabled", 00:18:46.926 "thread": "nvmf_tgt_poll_group_000", 00:18:46.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:46.926 "listen_address": { 00:18:46.926 "trtype": "TCP", 00:18:46.926 "adrfam": "IPv4", 00:18:46.926 "traddr": "10.0.0.2", 00:18:46.926 "trsvcid": "4420" 00:18:46.926 }, 00:18:46.926 "peer_address": { 00:18:46.926 "trtype": "TCP", 00:18:46.926 "adrfam": "IPv4", 00:18:46.926 "traddr": "10.0.0.1", 00:18:46.926 "trsvcid": "39084" 00:18:46.926 }, 00:18:46.926 "auth": { 00:18:46.926 "state": "completed", 00:18:46.926 "digest": "sha512", 00:18:46.926 "dhgroup": "ffdhe4096" 00:18:46.926 } 00:18:46.926 } 00:18:46.926 ]' 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.926 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.186 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:18:47.186 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:18:48.120 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.120 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:48.120 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.120 06:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.120 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.120 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.120 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.120 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.378 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:48.378 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.378 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:48.378 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:48.378 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:48.378 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.378 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.378 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.378 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.378 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.378 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.378 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.378 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.947 00:18:48.947 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.947 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.947 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.205 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.205 06:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.205 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.205 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.205 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.205 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.205 { 00:18:49.205 "cntlid": 125, 00:18:49.205 "qid": 0, 00:18:49.205 "state": "enabled", 00:18:49.205 "thread": "nvmf_tgt_poll_group_000", 00:18:49.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:49.205 "listen_address": { 00:18:49.205 "trtype": "TCP", 00:18:49.205 "adrfam": "IPv4", 00:18:49.205 "traddr": "10.0.0.2", 00:18:49.205 "trsvcid": "4420" 00:18:49.205 }, 00:18:49.205 "peer_address": { 00:18:49.205 "trtype": "TCP", 00:18:49.205 "adrfam": "IPv4", 00:18:49.205 "traddr": "10.0.0.1", 00:18:49.205 "trsvcid": "39104" 00:18:49.205 }, 00:18:49.205 "auth": { 00:18:49.205 "state": "completed", 00:18:49.205 "digest": "sha512", 00:18:49.205 "dhgroup": "ffdhe4096" 00:18:49.205 } 00:18:49.205 } 00:18:49.205 ]' 00:18:49.205 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.205 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.205 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.205 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.205 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.205 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.205 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.205 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.465 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:18:49.465 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:18:50.401 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.401 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:50.401 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.401 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.401 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.401 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.401 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.401 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.659 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:50.659 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.659 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:50.659 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:50.659 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:50.659 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.659 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:50.659 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.659 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.659 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.659 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:50.659 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.659 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.231 00:18:51.231 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.231 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.231 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.488 06:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.488 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.488 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.488 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.488 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.488 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.488 { 00:18:51.488 "cntlid": 127, 00:18:51.488 "qid": 0, 00:18:51.488 "state": "enabled", 00:18:51.488 "thread": "nvmf_tgt_poll_group_000", 00:18:51.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:51.488 "listen_address": { 00:18:51.488 "trtype": "TCP", 00:18:51.488 "adrfam": "IPv4", 00:18:51.488 "traddr": "10.0.0.2", 00:18:51.488 "trsvcid": "4420" 00:18:51.488 }, 00:18:51.488 "peer_address": { 00:18:51.488 "trtype": "TCP", 00:18:51.488 "adrfam": "IPv4", 00:18:51.488 "traddr": "10.0.0.1", 00:18:51.488 "trsvcid": "60654" 00:18:51.488 }, 00:18:51.488 "auth": { 00:18:51.488 "state": "completed", 00:18:51.488 "digest": "sha512", 00:18:51.488 "dhgroup": "ffdhe4096" 00:18:51.488 } 00:18:51.488 } 00:18:51.488 ]' 00:18:51.488 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.488 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.488 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.746 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.746 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.746 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.746 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.746 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.012 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:18:52.012 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:18:52.947 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.947 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:52.947 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.947 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.947 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.947 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.947 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.947 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:52.947 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:53.206 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:53.206 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.206 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.206 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:53.206 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:53.206 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.206 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.206 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.206 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.206 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.206 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.206 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.206 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.775 00:18:53.775 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.775 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.775 
06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.032 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.032 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.032 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.032 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.032 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.032 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.032 { 00:18:54.032 "cntlid": 129, 00:18:54.032 "qid": 0, 00:18:54.032 "state": "enabled", 00:18:54.033 "thread": "nvmf_tgt_poll_group_000", 00:18:54.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:54.033 "listen_address": { 00:18:54.033 "trtype": "TCP", 00:18:54.033 "adrfam": "IPv4", 00:18:54.033 "traddr": "10.0.0.2", 00:18:54.033 "trsvcid": "4420" 00:18:54.033 }, 00:18:54.033 "peer_address": { 00:18:54.033 "trtype": "TCP", 00:18:54.033 "adrfam": "IPv4", 00:18:54.033 "traddr": "10.0.0.1", 00:18:54.033 "trsvcid": "60690" 00:18:54.033 }, 00:18:54.033 "auth": { 00:18:54.033 "state": "completed", 00:18:54.033 "digest": "sha512", 00:18:54.033 "dhgroup": "ffdhe6144" 00:18:54.033 } 00:18:54.033 } 00:18:54.033 ]' 00:18:54.033 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.033 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.033 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.033 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:54.033 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.033 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.033 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.033 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.291 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:18:54.291 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret 
DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:18:55.228 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.228 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:55.228 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.228 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.228 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.228 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.228 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.228 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.487 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:55.487 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.487 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:55.487 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:55.487 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:55.487 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.487 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.487 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.487 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.487 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.487 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.487 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.487 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.051 00:18:56.310 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.310 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.310 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.568 { 00:18:56.568 "cntlid": 131, 00:18:56.568 "qid": 0, 00:18:56.568 "state": "enabled", 00:18:56.568 "thread": "nvmf_tgt_poll_group_000", 00:18:56.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:56.568 "listen_address": { 00:18:56.568 "trtype": "TCP", 00:18:56.568 "adrfam": "IPv4", 00:18:56.568 "traddr": "10.0.0.2", 00:18:56.568 "trsvcid": "4420" 00:18:56.568 }, 00:18:56.568 "peer_address": { 00:18:56.568 "trtype": "TCP", 00:18:56.568 "adrfam": "IPv4", 00:18:56.568 "traddr": "10.0.0.1", 00:18:56.568 "trsvcid": "60732" 00:18:56.568 }, 00:18:56.568 "auth": { 00:18:56.568 "state": "completed", 00:18:56.568 "digest": "sha512", 00:18:56.568 "dhgroup": "ffdhe6144" 00:18:56.568 } 00:18:56.568 } 00:18:56.568 ]' 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.568 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.825 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:18:56.825 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:18:57.760 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.760 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:57.760 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.760 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.760 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.760 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.760 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:57.760 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:58.016 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:58.016 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.016 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:58.016 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:58.016 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:58.016 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.016 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.016 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.016 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.016 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.016 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.016 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.016 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.580 00:18:58.580 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.580 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.580 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.836 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.836 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.836 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.836 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.836 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.836 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.836 { 00:18:58.836 "cntlid": 133, 00:18:58.836 "qid": 0, 00:18:58.836 "state": "enabled", 00:18:58.836 "thread": "nvmf_tgt_poll_group_000", 00:18:58.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:58.836 "listen_address": { 00:18:58.836 "trtype": "TCP", 00:18:58.836 "adrfam": "IPv4", 00:18:58.836 "traddr": "10.0.0.2", 00:18:58.836 "trsvcid": "4420" 00:18:58.836 }, 00:18:58.836 "peer_address": { 00:18:58.836 "trtype": "TCP", 00:18:58.836 "adrfam": "IPv4", 00:18:58.836 "traddr": "10.0.0.1", 00:18:58.836 "trsvcid": "60762" 00:18:58.836 }, 00:18:58.836 "auth": { 00:18:58.836 "state": "completed", 00:18:58.836 "digest": "sha512", 00:18:58.836 "dhgroup": "ffdhe6144" 00:18:58.836 } 00:18:58.836 } 00:18:58.836 ]' 00:18:58.836 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.836 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.836 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.092 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.092 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.092 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.092 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.092 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.352 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret 
DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:18:59.352 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:19:00.291 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.291 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:00.291 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.291 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.291 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.291 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.291 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:00.291 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:00.549 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:00.549 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.549 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:00.549 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:00.549 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.549 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.549 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:00.549 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.549 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.549 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.549 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.549 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:19:00.549 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.183 00:19:01.183 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.183 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.183 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.465 { 00:19:01.465 "cntlid": 135, 00:19:01.465 "qid": 0, 00:19:01.465 "state": "enabled", 00:19:01.465 "thread": "nvmf_tgt_poll_group_000", 00:19:01.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:01.465 "listen_address": { 00:19:01.465 "trtype": "TCP", 00:19:01.465 "adrfam": "IPv4", 00:19:01.465 "traddr": "10.0.0.2", 00:19:01.465 "trsvcid": "4420" 00:19:01.465 }, 00:19:01.465 "peer_address": { 00:19:01.465 "trtype": "TCP", 00:19:01.465 "adrfam": "IPv4", 00:19:01.465 "traddr": "10.0.0.1", 00:19:01.465 "trsvcid": "41576" 00:19:01.465 }, 00:19:01.465 "auth": { 00:19:01.465 "state": "completed", 00:19:01.465 "digest": "sha512", 00:19:01.465 "dhgroup": "ffdhe6144" 00:19:01.465 } 00:19:01.465 } 00:19:01.465 ]' 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.465 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.741 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:19:01.741 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:19:02.678 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.678 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:02.678 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.678 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.678 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.678 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.678 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.678 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.678 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.937 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:02.937 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.937 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:02.937 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:02.937 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:02.937 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.937 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.937 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.937 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.937 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.937 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.937 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.937 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.873 00:19:03.873 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.873 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.873 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.873 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.873 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.873 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.873 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.873 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.873 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.873 { 00:19:03.873 "cntlid": 137, 00:19:03.873 "qid": 0, 00:19:03.873 "state": "enabled", 00:19:03.873 "thread": "nvmf_tgt_poll_group_000", 00:19:03.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:03.873 "listen_address": { 00:19:03.873 "trtype": "TCP", 00:19:03.873 "adrfam": "IPv4", 00:19:03.873 "traddr": "10.0.0.2", 00:19:03.873 "trsvcid": "4420" 00:19:03.873 }, 00:19:03.873 "peer_address": { 00:19:03.873 "trtype": "TCP", 00:19:03.873 "adrfam": "IPv4", 00:19:03.873 "traddr": "10.0.0.1", 00:19:03.873 "trsvcid": "41602" 00:19:03.873 }, 00:19:03.873 "auth": { 00:19:03.873 "state": "completed", 00:19:03.873 "digest": "sha512", 00:19:03.873 "dhgroup": "ffdhe8192" 00:19:03.873 } 00:19:03.873 } 00:19:03.873 ]' 00:19:03.873 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.131 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.131 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.131 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:04.131 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.131 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.131 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.131 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.390 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:19:04.390 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:19:05.325 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.325 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:05.325 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.325 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.325 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.325 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.325 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.325 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.582 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:05.582 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.582 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:05.582 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:05.582 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:05.582 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.582 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.582 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.582 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.582 06:22:55 
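The ckey assignment at target/auth.sh@68 above relies on bash's ${var:+word} alternate-value expansion: the --dhchap-ctrlr-key flag materializes only when the ckeys entry for that index is non-empty, which is why the key3 passes in this trace add the host with a one-way key and no controller key. A minimal sketch of the idiom (keyid replaces the function's $3 positional parameter):

    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=)
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"   # prints 0: the flag is suppressed for the empty entry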
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.582 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.582 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.582 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.517 00:19:06.517 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.517 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.517 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.517 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.517 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.517 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.517 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.774 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.774 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.774 { 00:19:06.774 "cntlid": 139, 00:19:06.774 "qid": 0, 00:19:06.774 "state": "enabled", 00:19:06.774 "thread": "nvmf_tgt_poll_group_000", 00:19:06.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:06.774 "listen_address": { 00:19:06.774 "trtype": "TCP", 00:19:06.774 "adrfam": "IPv4", 00:19:06.774 "traddr": "10.0.0.2", 00:19:06.774 "trsvcid": "4420" 00:19:06.774 }, 00:19:06.774 "peer_address": { 00:19:06.774 "trtype": "TCP", 00:19:06.774 "adrfam": "IPv4", 00:19:06.774 "traddr": "10.0.0.1", 00:19:06.774 "trsvcid": "41632" 00:19:06.774 }, 00:19:06.774 "auth": { 00:19:06.774 "state": "completed", 00:19:06.774 "digest": "sha512", 00:19:06.774 "dhgroup": "ffdhe8192" 00:19:06.774 } 00:19:06.774 } 00:19:06.774 ]' 00:19:06.774 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.774 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.775 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.775 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:06.775 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.775 06:22:56 
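After every attach, the script verifies that authentication really completed with the parameters under test: nvmf_subsystem_get_qpairs returns the admin queue pair's auth block (qid 0 above), and three jq probes compare digest, dhgroup, and state against the expected values. Condensed from the checks above:

    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
    # expected for this iteration: sha512 / ffdhe8192 / completed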
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.775 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.775 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.033 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:19:07.033 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: --dhchap-ctrl-secret DHHC-1:02:NzIwNGNlNmY1MWIxMjAzMmVlZDVjYmQyN2JmYzVlMzkyMjNmNTI3NDM2Y2M3NjgxbLfXNQ==: 00:19:07.989 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.989 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:07.989 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.989 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.989 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.989 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.989 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:07.989 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:08.246 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:08.246 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.246 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:08.246 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:08.247 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:08.247 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.247 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.247 06:22:58 
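The same handshake is then exercised through the kernel initiator: nvme-cli receives the host and controller secrets in their textual DHHC-1 form and, once the connect succeeds, the controller is torn down again with a matching disconnect. Shape of the call, shortened from the trace with the secrets elided ($hostnqn and $hostid stand in for the uuid values above):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0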
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.247 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.247 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.247 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.247 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.247 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.181 00:19:09.181 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.181 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.181 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.438 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.438 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.438 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.438 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.438 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.438 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.438 { 00:19:09.438 "cntlid": 141, 00:19:09.438 "qid": 0, 00:19:09.438 "state": "enabled", 00:19:09.438 "thread": "nvmf_tgt_poll_group_000", 00:19:09.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:09.438 "listen_address": { 00:19:09.438 "trtype": "TCP", 00:19:09.438 "adrfam": "IPv4", 00:19:09.438 "traddr": "10.0.0.2", 00:19:09.438 "trsvcid": "4420" 00:19:09.438 }, 00:19:09.438 "peer_address": { 00:19:09.438 "trtype": "TCP", 00:19:09.438 "adrfam": "IPv4", 00:19:09.438 "traddr": "10.0.0.1", 00:19:09.438 "trsvcid": "41672" 00:19:09.438 }, 00:19:09.438 "auth": { 00:19:09.438 "state": "completed", 00:19:09.438 "digest": "sha512", 00:19:09.438 "dhgroup": "ffdhe8192" 00:19:09.438 } 00:19:09.438 } 00:19:09.438 ]' 00:19:09.438 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.438 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.438 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.438 06:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:09.438 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.695 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.696 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.696 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.955 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:19:09.955 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:01:Y2FiMjc5Njg2ZjdhNDkwNDdlMDEyMTc2MTlkNWJhNGWa61Yo: 00:19:10.889 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.889 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:10.889 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.889 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.889 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.889 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.889 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.889 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:11.148 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:11.148 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.148 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:11.148 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:11.148 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:11.148 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.148 06:23:01 
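A note on the secret strings above: the DHHC-1:<t>: prefix is the NVMe in-band authentication secret representation, where, as far as nvme-cli's gen-dhchap-key documents it, t=00 means the base64 payload is used as the key unchanged and 01/02/03 denote SHA-256/384/512-sized transformed keys; that is why the keys in this run carry different prefixes and payload lengths. A hypothetical generation call for a key like the :03: one used here (flags per nvme-cli's gen-dhchap-key, stated as an assumption):

    nvme gen-dhchap-key --hmac=3 --key-length=64 --nqn="$hostnqn"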
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:11.148 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.148 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.148 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.148 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:11.148 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.148 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.735 00:19:11.735 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.735 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.735 06:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.300 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.300 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.300 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.300 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.300 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.300 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.300 { 00:19:12.300 "cntlid": 143, 00:19:12.300 "qid": 0, 00:19:12.300 "state": "enabled", 00:19:12.300 "thread": "nvmf_tgt_poll_group_000", 00:19:12.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:12.300 "listen_address": { 00:19:12.300 "trtype": "TCP", 00:19:12.300 "adrfam": "IPv4", 00:19:12.300 "traddr": "10.0.0.2", 00:19:12.300 "trsvcid": "4420" 00:19:12.300 }, 00:19:12.300 "peer_address": { 00:19:12.300 "trtype": "TCP", 00:19:12.300 "adrfam": "IPv4", 00:19:12.300 "traddr": "10.0.0.1", 00:19:12.300 "trsvcid": "35922" 00:19:12.300 }, 00:19:12.300 "auth": { 00:19:12.300 "state": "completed", 00:19:12.300 "digest": "sha512", 00:19:12.300 "dhgroup": "ffdhe8192" 00:19:12.300 } 00:19:12.300 } 00:19:12.300 ]' 00:19:12.300 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.300 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.300 
06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.300 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:12.300 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.300 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.300 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.300 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.557 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:19:12.557 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:19:13.490 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.490 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:13.490 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.490 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.490 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.490 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:13.490 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:13.490 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:13.490 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.491 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.491 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.749 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:13.749 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.749 06:23:03 
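After the single-pair sweeps, the block starting at target/auth.sh@129 reverts the host to offering the full matrix at once, building the comma-separated lists with IFS=, and printf, and then repeats one authenticated connect; the qpair that follows reports sha512/ffdhe8192, so negotiation settled on that pair out of everything offered. The option set as issued above:

    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192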
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:13.749 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:13.749 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:13.749 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.749 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.749 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.749 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.749 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.749 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.749 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.749 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.683 00:19:14.683 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.683 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.683 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.683 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.683 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.683 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.683 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.683 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.683 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.683 { 00:19:14.683 "cntlid": 145, 00:19:14.683 "qid": 0, 00:19:14.683 "state": "enabled", 00:19:14.683 "thread": "nvmf_tgt_poll_group_000", 00:19:14.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:14.683 "listen_address": { 00:19:14.683 "trtype": "TCP", 00:19:14.683 "adrfam": "IPv4", 00:19:14.683 "traddr": "10.0.0.2", 00:19:14.683 "trsvcid": "4420" 00:19:14.683 }, 00:19:14.683 "peer_address": { 00:19:14.683 
"trtype": "TCP", 00:19:14.683 "adrfam": "IPv4", 00:19:14.683 "traddr": "10.0.0.1", 00:19:14.683 "trsvcid": "35958" 00:19:14.683 }, 00:19:14.683 "auth": { 00:19:14.683 "state": "completed", 00:19:14.683 "digest": "sha512", 00:19:14.684 "dhgroup": "ffdhe8192" 00:19:14.684 } 00:19:14.684 } 00:19:14.684 ]' 00:19:14.684 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.941 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.941 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.941 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:14.941 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.941 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.941 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.941 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.199 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:19:15.199 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZDM0MzczNGNkMDk0ZDA5NWE2NTVhZDljOTZjODc0ODIzOWE2MTA5OTAxN2JlZDQ2ysPawQ==: --dhchap-ctrl-secret DHHC-1:03:NjRkODU1NWQyZDEyOWZjYzljZmFiOTNiZTljZDRmZTc5ZDdiY2Q3YjIzMDFlYjA4ZDk1MWY2ZTZiOGEyNDQxY31CZDQ=: 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:16.133 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:17.071 request: 00:19:17.071 { 00:19:17.071 "name": "nvme0", 00:19:17.071 "trtype": "tcp", 00:19:17.071 "traddr": "10.0.0.2", 00:19:17.071 "adrfam": "ipv4", 00:19:17.071 "trsvcid": "4420", 00:19:17.071 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:17.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:17.071 "prchk_reftag": false, 00:19:17.071 "prchk_guard": false, 00:19:17.071 "hdgst": false, 00:19:17.071 "ddgst": false, 00:19:17.071 "dhchap_key": "key2", 00:19:17.071 "allow_unrecognized_csi": false, 00:19:17.071 "method": "bdev_nvme_attach_controller", 00:19:17.071 "req_id": 1 00:19:17.071 } 00:19:17.071 Got JSON-RPC error response 00:19:17.071 response: 00:19:17.071 { 00:19:17.071 "code": -5, 00:19:17.071 "message": "Input/output error" 00:19:17.071 } 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.071 06:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:17.071 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:18.005 request: 00:19:18.005 { 00:19:18.005 "name": "nvme0", 00:19:18.005 "trtype": "tcp", 00:19:18.005 "traddr": "10.0.0.2", 00:19:18.005 "adrfam": "ipv4", 00:19:18.005 "trsvcid": "4420", 00:19:18.005 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:18.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:18.005 "prchk_reftag": false, 00:19:18.005 "prchk_guard": false, 00:19:18.005 "hdgst": false, 00:19:18.005 "ddgst": false, 00:19:18.005 "dhchap_key": "key1", 00:19:18.005 "dhchap_ctrlr_key": "ckey2", 00:19:18.006 "allow_unrecognized_csi": false, 00:19:18.006 "method": "bdev_nvme_attach_controller", 00:19:18.006 "req_id": 1 00:19:18.006 } 00:19:18.006 Got JSON-RPC error response 00:19:18.006 response: 00:19:18.006 { 00:19:18.006 "code": -5, 00:19:18.006 "message": "Input/output error" 00:19:18.006 } 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:18.006 06:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.006 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.574 request: 00:19:18.574 { 00:19:18.574 "name": "nvme0", 00:19:18.574 "trtype": "tcp", 00:19:18.574 "traddr": "10.0.0.2", 00:19:18.574 "adrfam": "ipv4", 00:19:18.574 "trsvcid": "4420", 00:19:18.574 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:18.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:18.574 "prchk_reftag": false, 00:19:18.574 "prchk_guard": false, 00:19:18.574 "hdgst": false, 00:19:18.574 "ddgst": false, 00:19:18.574 "dhchap_key": "key1", 00:19:18.574 "dhchap_ctrlr_key": "ckey1", 00:19:18.574 "allow_unrecognized_csi": false, 00:19:18.574 "method": "bdev_nvme_attach_controller", 00:19:18.574 "req_id": 1 00:19:18.574 } 00:19:18.574 Got JSON-RPC error response 00:19:18.574 response: 00:19:18.574 { 00:19:18.574 "code": -5, 00:19:18.574 "message": "Input/output error" 00:19:18.574 } 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1052073 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1052073 ']' 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1052073 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1052073 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1052073' 00:19:18.574 killing process with pid 1052073 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1052073 00:19:18.574 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1052073 00:19:18.833 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:18.833 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.833 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.833 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.833 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1074906 00:19:18.833 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:18.833 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1074906 00:19:18.833 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1074906 ']' 00:19:18.833 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.833 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.833 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.833 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.833 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.091 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.091 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:19.091 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:19.091 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:19.091 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.091 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.091 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:19.091 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1074906 00:19:19.091 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1074906 ']' 00:19:19.091 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.091 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.091 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
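
(Annotation, not part of the trace: the target has just been relaunched with --wait-for-rpc -L nvmf_auth, and the block of trace that follows re-provisions the DH-HMAC-CHAP keys through the keyring before hosts are re-added. Reduced to its essentials, the provisioning is one keyring RPC per key slot, optionally one more for the controller-side key, plus a host grant. A minimal sketch using the same throwaway /tmp key files and uuid-derived host NQN this particular run generated, so treat those names as placeholders for your own:)

    # load a key and, for bidirectional auth, its controller-side counterpart
    scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.k5B
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CGh
    # grant the host access to the subsystem using that key pair
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
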
00:19:19.348 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.348 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.606 null0 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.k5B 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.CGh ]] 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CGh 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.A52 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.CDW ]] 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CDW 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:19.606 06:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Xcc 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.606 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.1Fo ]] 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Fo 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Ufw 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
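
(Annotation, not part of the trace: what follows is the negotiation-mismatch pass of target/auth.sh. After a clean key3 attach and disconnect, the host initiator is deliberately pinned to a single digest via bdev_nvme_set_options; the NOT wrapper around bdev_connect marks the subsequent attaches against the sha512/ffdhe8192 subsystem as expected failures, which surface as JSON-RPC code -5, Input/output error. The allowed digest and DH-group sets are then widened again so later attaches succeed. A hedged reconstruction of the host-side toggle, against the same /var/tmp/host.sock RPC socket the harness uses:)

    # narrow the host to sha256 only; attaching to a sha512/ffdhe8192 subsystem should now fail
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    # restore the full negotiation space before the retry
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
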
00:19:19.865 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.241 nvme0n1 00:19:21.241 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.241 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.241 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.499 { 00:19:21.499 "cntlid": 1, 00:19:21.499 "qid": 0, 00:19:21.499 "state": "enabled", 00:19:21.499 "thread": "nvmf_tgt_poll_group_000", 00:19:21.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:21.499 "listen_address": { 00:19:21.499 "trtype": "TCP", 00:19:21.499 "adrfam": "IPv4", 00:19:21.499 "traddr": "10.0.0.2", 00:19:21.499 "trsvcid": "4420" 00:19:21.499 }, 00:19:21.499 "peer_address": { 00:19:21.499 "trtype": "TCP", 00:19:21.499 "adrfam": "IPv4", 00:19:21.499 "traddr": "10.0.0.1", 00:19:21.499 "trsvcid": "49744" 00:19:21.499 }, 00:19:21.499 "auth": { 00:19:21.499 "state": "completed", 00:19:21.499 "digest": "sha512", 00:19:21.499 "dhgroup": "ffdhe8192" 00:19:21.499 } 00:19:21.499 } 00:19:21.499 ]' 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.499 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.757 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:19:21.757 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:19:22.692 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.692 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:22.692 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.692 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.692 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.692 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:22.692 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.692 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.692 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.692 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:22.692 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:23.257 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:23.257 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:23.257 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:23.257 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:23.257 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.257 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:23.257 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.257 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:23.258 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.258 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.515 request: 00:19:23.515 { 00:19:23.515 "name": "nvme0", 00:19:23.515 "trtype": "tcp", 00:19:23.515 "traddr": "10.0.0.2", 00:19:23.515 "adrfam": "ipv4", 00:19:23.515 "trsvcid": "4420", 00:19:23.515 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:23.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:23.515 "prchk_reftag": false, 00:19:23.515 "prchk_guard": false, 00:19:23.515 "hdgst": false, 00:19:23.515 "ddgst": false, 00:19:23.515 "dhchap_key": "key3", 00:19:23.515 "allow_unrecognized_csi": false, 00:19:23.515 "method": "bdev_nvme_attach_controller", 00:19:23.515 "req_id": 1 00:19:23.515 } 00:19:23.515 Got JSON-RPC error response 00:19:23.515 response: 00:19:23.515 { 00:19:23.515 "code": -5, 00:19:23.515 "message": "Input/output error" 00:19:23.515 } 00:19:23.515 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:23.515 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:23.515 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:23.515 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:23.515 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:23.515 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:23.515 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:23.515 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:23.774 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:23.774 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:23.774 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:23.774 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:23.774 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.774 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:23.774 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.774 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:23.774 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.774 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.031 request: 00:19:24.031 { 00:19:24.031 "name": "nvme0", 00:19:24.031 "trtype": "tcp", 00:19:24.031 "traddr": "10.0.0.2", 00:19:24.031 "adrfam": "ipv4", 00:19:24.031 "trsvcid": "4420", 00:19:24.031 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:24.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:24.031 "prchk_reftag": false, 00:19:24.031 "prchk_guard": false, 00:19:24.031 "hdgst": false, 00:19:24.031 "ddgst": false, 00:19:24.031 "dhchap_key": "key3", 00:19:24.031 "allow_unrecognized_csi": false, 00:19:24.031 "method": "bdev_nvme_attach_controller", 00:19:24.031 "req_id": 1 00:19:24.031 } 00:19:24.031 Got JSON-RPC error response 00:19:24.031 response: 00:19:24.031 { 00:19:24.031 "code": -5, 00:19:24.031 "message": "Input/output error" 00:19:24.031 } 00:19:24.031 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:24.031 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.031 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.031 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.031 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:24.031 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:24.031 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:24.031 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:24.031 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:24.031 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:24.289 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:24.290 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:24.856 request: 00:19:24.856 { 00:19:24.856 "name": "nvme0", 00:19:24.856 "trtype": "tcp", 00:19:24.856 "traddr": "10.0.0.2", 00:19:24.856 "adrfam": "ipv4", 00:19:24.856 "trsvcid": "4420", 00:19:24.856 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:24.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:24.856 "prchk_reftag": false, 00:19:24.856 "prchk_guard": false, 00:19:24.856 "hdgst": false, 00:19:24.856 "ddgst": false, 00:19:24.856 "dhchap_key": "key0", 00:19:24.856 "dhchap_ctrlr_key": "key1", 00:19:24.856 "allow_unrecognized_csi": false, 00:19:24.856 "method": "bdev_nvme_attach_controller", 00:19:24.856 "req_id": 1 00:19:24.856 } 00:19:24.856 Got JSON-RPC error response 00:19:24.856 response: 00:19:24.856 { 00:19:24.856 "code": -5, 00:19:24.856 "message": "Input/output error" 00:19:24.856 } 00:19:24.856 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:24.856 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.856 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.856 06:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.856 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:24.856 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:24.856 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:25.118 nvme0n1 00:19:25.118 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:25.118 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:25.118 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.375 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.375 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.375 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.940 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:19:25.940 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.940 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.940 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.940 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:25.940 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:25.940 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:27.319 nvme0n1 00:19:27.319 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:27.319 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:27.319 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.319 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.319 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:27.319 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.319 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.577 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.577 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:27.577 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:27.577 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.837 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.837 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:19:27.837 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: --dhchap-ctrl-secret DHHC-1:03:NzczMjMwNDVjNzE1YzlmZjIwZTIwN2JhOGQzYjQ0YzEyNTJlYmEzMTQ2MDVkNWY4ZDI4NDkzOWE2YWY1Y2QxZTSe1L0=: 00:19:28.776 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:28.776 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:28.776 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:28.776 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:28.776 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:28.776 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:28.776 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:28.776 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.776 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.035 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:19:29.035 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:29.035 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:29.035 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:29.035 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.035 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:29.035 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.035 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:29.035 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:29.035 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:29.606 request: 00:19:29.606 { 00:19:29.606 "name": "nvme0", 00:19:29.606 "trtype": "tcp", 00:19:29.606 "traddr": "10.0.0.2", 00:19:29.606 "adrfam": "ipv4", 00:19:29.606 "trsvcid": "4420", 00:19:29.606 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:29.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:29.606 "prchk_reftag": false, 00:19:29.606 "prchk_guard": false, 00:19:29.606 "hdgst": false, 00:19:29.606 "ddgst": false, 00:19:29.606 "dhchap_key": "key1", 00:19:29.606 "allow_unrecognized_csi": false, 00:19:29.606 "method": "bdev_nvme_attach_controller", 00:19:29.606 "req_id": 1 00:19:29.606 } 00:19:29.606 Got JSON-RPC error response 00:19:29.606 response: 00:19:29.606 { 00:19:29.606 "code": -5, 00:19:29.606 "message": "Input/output error" 00:19:29.606 } 00:19:29.864 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:29.864 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:29.864 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:29.864 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:29.864 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:29.864 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:29.864 06:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:31.250 nvme0n1 00:19:31.250 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:31.250 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:31.250 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.507 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.507 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.507 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.764 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:31.764 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.764 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.764 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.764 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:31.764 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:31.764 06:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:32.022 nvme0n1 00:19:32.022 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:32.022 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:32.022 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.311 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.311 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.311 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: '' 2s 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: ]] 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OWRhMTk3ZjFjNTE0MTVjMTQ3NmQwYTBmYjQwMTAyOTK2dPRs: 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:32.595 06:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:34.510 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:34.510 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:34.510 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:34.510 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:34.510 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:34.510 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:34.767 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: 2s 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: ]] 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTdiNjEwY2I2M2IyOWZiNmRiMWIxMTYyY2FiMjhkODUwOWNlMTc2YzViODJhZGQ5uiReAQ==: 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:34.768 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:36.670 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:38.049 nvme0n1 00:19:38.049 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:38.050 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.050 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.050 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.050 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:38.050 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:38.987 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:38.987 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:38.987 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.244 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.244 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:39.244 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.244 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.245 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.245 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:39.245 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:39.503 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:39.503 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:39.503 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:19:39.761 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:19:40.697 request:
00:19:40.697 {
00:19:40.697 "name": "nvme0",
00:19:40.697 "dhchap_key": "key1",
00:19:40.697 "dhchap_ctrlr_key": "key3",
00:19:40.697 "method": "bdev_nvme_set_keys",
00:19:40.697 "req_id": 1
00:19:40.697 }
00:19:40.697 Got JSON-RPC error response
00:19:40.697 response:
00:19:40.697 {
00:19:40.697 "code": -13,
00:19:40.697 "message": "Permission denied"
00:19:40.697 }
00:19:40.697 06:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:19:40.697 06:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:40.697 06:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:40.697 06:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:40.697 06:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:19:40.697 06:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:19:40.697 06:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:40.957 06:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:40.957 06:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:41.893 06:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:41.893 06:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:41.894 06:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.151 06:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:42.151 06:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:42.151 06:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.151 06:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.151 06:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.151 06:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:42.151 06:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:42.151 06:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:43.526 nvme0n1 00:19:43.526 06:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:43.526 06:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.526 06:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.526 06:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.526 06:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:43.526 06:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:43.526 06:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:43.526 06:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
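The auth.sh@262/@263 lines above are the host-side settle loop this test leans on: once keys change underneath a connected controller, the bdev layer (attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1) drops the controller within about a second, and the script simply re-reads the controller list until it is empty. A minimal sketch of that wait, assuming rpc.py and jq on PATH and the host RPC socket at /var/tmp/host.sock as in the trace; the retry budget is an invented parameter, the test itself loops with a plain sleep 1s:

    # Poll bdev_nvme_get_controllers until the host has fully dropped the controller.
    hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

    wait_for_detach() {
        local retries=${1:-10}    # hypothetical cap, not part of the traced helper
        while (( retries-- > 0 )); do
            # bdev_nvme_get_controllers prints a JSON array; length 0 means detached
            (( $(hostrpc bdev_nvme_get_controllers | jq length) == 0 )) && return 0
            sleep 1
        done
        return 1
    }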
00:19:43.526 06:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:43.526 06:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:19:43.526 06:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:43.526 06:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:19:43.526 06:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:19:44.457 request:
00:19:44.457 {
00:19:44.457 "name": "nvme0",
00:19:44.457 "dhchap_key": "key2",
00:19:44.457 "dhchap_ctrlr_key": "key0",
00:19:44.457 "method": "bdev_nvme_set_keys",
00:19:44.457 "req_id": 1
00:19:44.457 }
00:19:44.457 Got JSON-RPC error response
00:19:44.457 response:
00:19:44.457 {
00:19:44.457 "code": -13,
00:19:44.457 "message": "Permission denied"
00:19:44.457 }
00:19:44.457 06:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:19:44.457 06:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:44.457 06:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:44.457 06:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:44.457 06:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:19:44.457 06:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:19:44.457 06:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:44.713 06:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:19:44.713 06:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:19:45.648 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:19:45.648 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:19:45.648 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:45.905 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:19:45.905 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:19:45.905 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:19:45.905 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1052218
00:19:45.905 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1052218 ']'
00:19:45.905 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1052218
00:19:45.905 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:19:45.905
06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.905 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1052218 00:19:45.905 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:45.905 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:45.905 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1052218' 00:19:45.905 killing process with pid 1052218 00:19:45.905 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1052218 00:19:45.905 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1052218 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:46.473 rmmod nvme_tcp 00:19:46.473 rmmod nvme_fabrics 00:19:46.473 rmmod nvme_keyring 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1074906 ']' 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1074906 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1074906 ']' 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1074906 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1074906 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1074906' 00:19:46.473 killing process with pid 1074906 00:19:46.473 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1074906 00:19:46.473 06:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1074906 00:19:46.732 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:46.732 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:46.732 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:46.732 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:46.732 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:46.732 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:46.732 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:46.732 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:46.732 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:46.732 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.732 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.732 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.640 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:48.640 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.k5B /tmp/spdk.key-sha256.A52 /tmp/spdk.key-sha384.Xcc /tmp/spdk.key-sha512.Ufw /tmp/spdk.key-sha512.CGh /tmp/spdk.key-sha384.CDW /tmp/spdk.key-sha256.1Fo '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:48.640 00:19:48.640 real 3m31.201s 00:19:48.640 user 8m16.379s 00:19:48.640 sys 0m27.645s 00:19:48.640 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.640 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.640 ************************************ 00:19:48.640 END TEST nvmf_auth_target 00:19:48.640 ************************************ 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:48.899 ************************************ 00:19:48.899 START TEST nvmf_bdevio_no_huge 00:19:48.899 ************************************ 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:48.899 * Looking for test storage... 
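The teardown traced just above (auth.sh@21/@22 through killprocess and nvmftestfini) reduces to three moves: kill the host and target daemons by pid, unload the nvme-tcp module stack, and strip only the firewall rules the test added. A condensed sketch, paraphrased from the trace rather than copied; the function body is a reconstruction under those assumptions:

    # Paraphrase of the killprocess helper seen at common/autotest_common.sh@954-@978.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # nothing to do if it already exited
        ps --no-headers -o comm= "$pid"             # the trace logs reactor_0 / reactor_1 here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # works because the daemon is a child of this shell
    }

    # Module and firewall cleanup from nvmftestfini: every rule the test installed
    # carries an SPDK_NVMF comment, so filtering the saved ruleset removes exactly those.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore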
00:19:48.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:48.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.899 --rc genhtml_branch_coverage=1 00:19:48.899 --rc genhtml_function_coverage=1 00:19:48.899 --rc genhtml_legend=1 00:19:48.899 --rc geninfo_all_blocks=1 00:19:48.899 --rc geninfo_unexecuted_blocks=1 00:19:48.899 00:19:48.899 ' 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:48.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.899 --rc genhtml_branch_coverage=1 00:19:48.899 --rc genhtml_function_coverage=1 00:19:48.899 --rc genhtml_legend=1 00:19:48.899 --rc geninfo_all_blocks=1 00:19:48.899 --rc geninfo_unexecuted_blocks=1 00:19:48.899 00:19:48.899 ' 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:48.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.899 --rc genhtml_branch_coverage=1 00:19:48.899 --rc genhtml_function_coverage=1 00:19:48.899 --rc genhtml_legend=1 00:19:48.899 --rc geninfo_all_blocks=1 00:19:48.899 --rc geninfo_unexecuted_blocks=1 00:19:48.899 00:19:48.899 ' 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:48.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.899 --rc genhtml_branch_coverage=1 00:19:48.899 --rc genhtml_function_coverage=1 00:19:48.899 --rc genhtml_legend=1 00:19:48.899 --rc geninfo_all_blocks=1 00:19:48.899 --rc geninfo_unexecuted_blocks=1 00:19:48.899 00:19:48.899 ' 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.899 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:48.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:48.900 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.431 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.431 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:51.431 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:51.431 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:51.431 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:51.431 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:51.432 
06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:51.432 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:51.432 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:51.432 Found net devices under 0000:84:00.0: cvl_0_0 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:51.432 Found net devices under 0000:84:00.1: cvl_0_1 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:19:51.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:51.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms
00:19:51.432
00:19:51.432 --- 10.0.0.2 ping statistics ---
00:19:51.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:51.432 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms
00:19:51.432 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:51.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:51.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms
00:19:51.432
00:19:51.432 --- 10.0.0.1 ping statistics ---
00:19:51.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:51.432 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1080203
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1080203
00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge
-- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.433 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.433 [2024-12-08 06:23:41.298964] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:19:51.433 [2024-12-08 06:23:41.299065] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:51.433 [2024-12-08 06:23:41.377828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.433 [2024-12-08 06:23:41.433022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.433 [2024-12-08 06:23:41.433088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.433 [2024-12-08 06:23:41.433116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.433 [2024-12-08 06:23:41.433127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.433 [2024-12-08 06:23:41.433136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
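Everything from nvmf/common.sh@508 through the waitforlisten above boils down to: start nvmf_tgt inside the test netns without hugepages, remember its pid, and block until the RPC socket answers. A sketch of that sequence, with the binary path and flags copied from the trace; the readiness probe via rpc_get_methods is an assumption, since waitforlisten's exact probe is not shown and any cheap RPC would do:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # -s 1024 caps DPDK memory at 1 GiB: with --no-huge it is served from ordinary pages.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!

    # Block until the target answers on its default RPC socket before configuring it.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.1
    done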
00:19:51.433 [2024-12-08 06:23:41.434316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:51.433 [2024-12-08 06:23:41.434390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.433 [2024-12-08 06:23:41.434387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:51.433 [2024-12-08 06:23:41.434368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.691 [2024-12-08 06:23:41.594305] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.691 Malloc0 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.691 [2024-12-08 06:23:41.632632] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:51.691 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:51.691 { 00:19:51.691 "params": { 00:19:51.691 "name": "Nvme$subsystem", 00:19:51.692 "trtype": "$TEST_TRANSPORT", 00:19:51.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.692 "adrfam": "ipv4", 00:19:51.692 "trsvcid": "$NVMF_PORT", 00:19:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.692 "hdgst": ${hdgst:-false}, 00:19:51.692 "ddgst": ${ddgst:-false} 00:19:51.692 }, 00:19:51.692 "method": "bdev_nvme_attach_controller" 00:19:51.692 } 00:19:51.692 EOF 00:19:51.692 )") 00:19:51.692 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:51.692 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:51.692 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:51.692 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:51.692 "params": { 00:19:51.692 "name": "Nvme1", 00:19:51.692 "trtype": "tcp", 00:19:51.692 "traddr": "10.0.0.2", 00:19:51.692 "adrfam": "ipv4", 00:19:51.692 "trsvcid": "4420", 00:19:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.692 "hdgst": false, 00:19:51.692 "ddgst": false 00:19:51.692 }, 00:19:51.692 "method": "bdev_nvme_attach_controller" 00:19:51.692 }' 00:19:51.692 [2024-12-08 06:23:41.683146] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:19:51.692 [2024-12-08 06:23:41.683215] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1080226 ]
00:19:51.692 [2024-12-08 06:23:41.755366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:51.949 [2024-12-08 06:23:41.820693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:51.949 [2024-12-08 06:23:41.820748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:51.949 [2024-12-08 06:23:41.820752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:52.207 I/O targets:
00:19:52.207 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:19:52.207
00:19:52.207
00:19:52.207 CUnit - A unit testing framework for C - Version 2.1-3
00:19:52.207 http://cunit.sourceforge.net/
00:19:52.207
00:19:52.207
00:19:52.207 Suite: bdevio tests on: Nvme1n1
00:19:52.207 Test: blockdev write read block ...passed
00:19:52.207 Test: blockdev write zeroes read block ...passed
00:19:52.207 Test: blockdev write zeroes read no split ...passed
00:19:52.207 Test: blockdev write zeroes read split ...passed
00:19:52.207 Test: blockdev write zeroes read split partial ...passed
00:19:52.207 Test: blockdev reset ...[2024-12-08 06:23:42.288564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:19:52.207 [2024-12-08 06:23:42.288713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a899f0 (9): Bad file descriptor
00:19:52.465 [2024-12-08 06:23:42.347116] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:19:52.465 passed
00:19:52.465 Test: blockdev write read 8 blocks ...passed
00:19:52.465 Test: blockdev write read size > 128k ...passed
00:19:52.465 Test: blockdev write read invalid size ...passed
00:19:52.465 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:52.465 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:52.465 Test: blockdev write read max offset ...passed
00:19:52.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:52.465 Test: blockdev writev readv 8 blocks ...passed
00:19:52.465 Test: blockdev writev readv 30 x 1block ...passed
00:19:52.465 Test: blockdev writev readv block ...passed
00:19:52.465 Test: blockdev writev readv size > 128k ...passed
00:19:52.465 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:52.465 Test: blockdev comparev and writev ...[2024-12-08 06:23:42.520924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:52.465 [2024-12-08 06:23:42.520961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:52.465 [2024-12-08 06:23:42.520986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:52.465 [2024-12-08 06:23:42.521004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:52.465 [2024-12-08 06:23:42.521403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:52.465 [2024-12-08 06:23:42.521434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:19:52.465 [2024-12-08 06:23:42.521457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:52.465 [2024-12-08 06:23:42.521473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:19:52.465 [2024-12-08 06:23:42.521868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:52.465 [2024-12-08 06:23:42.521892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:19:52.465 [2024-12-08 06:23:42.521914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:52.465 [2024-12-08 06:23:42.521930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:19:52.465 [2024-12-08 06:23:42.522369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:52.465 [2024-12-08 06:23:42.522393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:19:52.465 [2024-12-08 06:23:42.522414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:52.465 [2024-12-08 06:23:42.522430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:19:52.465 passed
00:19:52.722 Test: blockdev nvme passthru rw ...passed
00:19:52.722 Test: blockdev nvme passthru vendor specific ...[2024-12-08 06:23:42.605187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:52.722 [2024-12-08 06:23:42.605218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:19:52.722 [2024-12-08 06:23:42.605430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:52.722 [2024-12-08 06:23:42.605455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:19:52.722 [2024-12-08 06:23:42.605615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:52.722 [2024-12-08 06:23:42.605638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:19:52.722 [2024-12-08 06:23:42.605805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:52.722 [2024-12-08 06:23:42.605829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:19:52.722 passed
00:19:52.723 Test: blockdev nvme admin passthru ...passed
00:19:52.723 Test: blockdev copy ...passed
00:19:52.723
00:19:52.723 Run Summary: Type Total Ran Passed Failed Inactive
00:19:52.723 suites 1 1 n/a 0 0
00:19:52.723 tests 23 23 23 0 0
00:19:52.723 asserts 152 152 152 0 n/a
00:19:52.723
00:19:52.723 Elapsed time = 0.984 seconds
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:52.980 rmmod nvme_tcp
00:19:52.980 rmmod nvme_fabrics
00:19:52.980 rmmod nvme_keyring
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge --
nvmf/common.sh@128 -- # set -e 00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1080203 ']' 00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1080203 00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1080203 ']' 00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1080203 00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.980 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1080203 00:19:53.239 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:53.239 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:53.239 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1080203' 00:19:53.239 killing process with pid 1080203 00:19:53.239 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1080203 00:19:53.239 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1080203 00:19:53.497 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:53.497 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:53.497 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:53.497 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:53.497 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:53.497 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:53.497 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:53.497 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:53.497 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:53.497 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.497 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.497 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.035 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:56.035 00:19:56.035 real 0m6.753s 00:19:56.035 user 0m11.180s 00:19:56.035 sys 0m2.671s 00:19:56.035 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.035 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:56.035 ************************************ 00:19:56.035 END TEST nvmf_bdevio_no_huge 00:19:56.035 ************************************ 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:56.036 ************************************ 00:19:56.036 START TEST nvmf_tls 00:19:56.036 ************************************ 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:56.036 * Looking for test storage... 00:19:56.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:56.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.036 --rc genhtml_branch_coverage=1 00:19:56.036 --rc genhtml_function_coverage=1 00:19:56.036 --rc genhtml_legend=1 00:19:56.036 --rc geninfo_all_blocks=1 00:19:56.036 --rc geninfo_unexecuted_blocks=1 00:19:56.036 00:19:56.036 ' 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:56.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.036 --rc genhtml_branch_coverage=1 00:19:56.036 --rc genhtml_function_coverage=1 00:19:56.036 --rc genhtml_legend=1 00:19:56.036 --rc geninfo_all_blocks=1 00:19:56.036 --rc geninfo_unexecuted_blocks=1 00:19:56.036 00:19:56.036 ' 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:56.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.036 --rc genhtml_branch_coverage=1 00:19:56.036 --rc genhtml_function_coverage=1 00:19:56.036 --rc genhtml_legend=1 00:19:56.036 --rc geninfo_all_blocks=1 00:19:56.036 --rc geninfo_unexecuted_blocks=1 00:19:56.036 00:19:56.036 ' 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:56.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.036 --rc genhtml_branch_coverage=1 00:19:56.036 --rc genhtml_function_coverage=1 00:19:56.036 --rc genhtml_legend=1 00:19:56.036 --rc geninfo_all_blocks=1 00:19:56.036 --rc geninfo_unexecuted_blocks=1 00:19:56.036 00:19:56.036 ' 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
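The lt/cmp_versions trace above is common.sh's pure-bash version guard for lcov: both version strings are split on '.', '-' and ':' and compared numerically field by field, with missing fields treated as 0. A standalone sketch of the same idea (condensed, not the verbatim helper, and assuming purely numeric fields):

    lt() {   # lt A B: true when version A sorts strictly before version B
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal versions are not "less than"
    }
    lt 1.15 2 && echo "old lcov"   # the branch this run takes before exporting the --rc lcov_* options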
00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.036 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:56.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:56.037 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:57.937 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.937 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:57.938 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:57.938 Found net devices under 0000:84:00.0: cvl_0_0 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:57.938 Found net devices under 0000:84:00.1: cvl_0_1 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:57.938 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:57.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:19:57.938 00:19:57.938 --- 10.0.0.2 ping statistics --- 00:19:57.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.938 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:57.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:57.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:19:57.938 00:19:57.938 --- 10.0.0.1 ping statistics --- 00:19:57.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.938 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:57.938 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.195 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1082444 00:19:58.195 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:58.195 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1082444 00:19:58.195 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1082444 ']' 00:19:58.195 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.195 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.195 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.195 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.195 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.195 [2024-12-08 06:23:48.109148] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:19:58.195 [2024-12-08 06:23:48.109242] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.195 [2024-12-08 06:23:48.182299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.195 [2024-12-08 06:23:48.234346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.195 [2024-12-08 06:23:48.234408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.195 [2024-12-08 06:23:48.234428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.195 [2024-12-08 06:23:48.234438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.196 [2024-12-08 06:23:48.234448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.196 [2024-12-08 06:23:48.235131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.452 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.452 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:58.452 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:58.452 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:58.452 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.452 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.452 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:58.452 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:58.710 true 00:19:58.710 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:58.710 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:58.967 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:58.967 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:58.967 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:59.225 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.225 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:59.482 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:59.482 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:59.482 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:59.740 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.740 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:59.997 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:59.997 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:59.997 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.997 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:00.255 06:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:00.255 06:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:00.255 06:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:00.512 06:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:00.512 06:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:00.770 06:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:00.770 06:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:00.770 06:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:01.029 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:01.029 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:01.287 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:01.546 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:01.546 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:01.546 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Fup7OwbOQ1 00:20:01.546 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:01.547 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.C1vWVCV4R9 00:20:01.547 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:01.547 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:01.547 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Fup7OwbOQ1 00:20:01.547 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.C1vWVCV4R9 00:20:01.547 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:01.805 06:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:02.065 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Fup7OwbOQ1 00:20:02.065 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Fup7OwbOQ1 00:20:02.065 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:02.322 [2024-12-08 06:23:52.435902] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.580 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:02.838 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.097 [2024-12-08 06:23:52.981407] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.097 [2024-12-08 06:23:52.981699] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.097 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:03.355 malloc0 00:20:03.355 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:03.614 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Fup7OwbOQ1 00:20:03.873 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:04.131 06:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Fup7OwbOQ1 00:20:14.165 Initializing NVMe Controllers 00:20:14.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:14.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:14.165 Initialization complete. Launching workers. 00:20:14.165 ======================================================== 00:20:14.165 Latency(us) 00:20:14.165 Device Information : IOPS MiB/s Average min max 00:20:14.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8803.46 34.39 7271.08 1247.66 9096.68 00:20:14.165 ======================================================== 00:20:14.165 Total : 8803.46 34.39 7271.08 1247.66 9096.68 00:20:14.165 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Fup7OwbOQ1 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Fup7OwbOQ1 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1084462 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1084462 /var/tmp/bdevperf.sock 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1084462 ']' 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:14.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.165 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.422 [2024-12-08 06:24:04.290760] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:20:14.422 [2024-12-08 06:24:04.290857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084462 ] 00:20:14.422 [2024-12-08 06:24:04.358077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.422 [2024-12-08 06:24:04.414624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.422 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.422 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:14.422 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Fup7OwbOQ1 00:20:14.680 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:15.246 [2024-12-08 06:24:05.060604] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.246 TLSTESTn1 00:20:15.246 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:15.246 Running I/O for 10 seconds... 
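The prefix/key/digest assignments traced at nvmf/common.sh@730-733 above are the inputs to the PSK interchange format used throughout this run: the configured key bytes plus a little-endian CRC32, base64-encoded between an "NVMeTLSkey-1:<hh>:" prefix (hh is the hash indicator, 01 here, 02 for the longer key generated later) and a trailing colon. A minimal bash sketch of that derivation follows; the function name and exact body are reconstructions from the traced values, not the verbatim nvmf/common.sh source.

format_interchange_psk_sketch() {
	local prefix=NVMeTLSkey-1 key=$1 digest=$2
	python - <<EOF
import base64
import zlib
# Append a little-endian CRC32 of the key bytes, then base64-encode the blob.
crc = zlib.crc32(b"$key").to_bytes(4, byteorder="little")
b64 = base64.b64encode(b"$key" + crc)
print("$prefix:{:02x}:{}:".format($digest, b64.decode("utf-8")))
EOF
}

# Should reproduce key_2 exactly as traced at target/tls.sh@120:
format_interchange_psk_sketch ffeeddccbbaa99887766554433221100 1
# expected: NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:

The same transformation with digest 2 and the 48-character key yields the NVMeTLSkey-1:02: string (key_long) generated further down in this log.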
00:20:17.551 3548.00 IOPS, 13.86 MiB/s [2024-12-08T05:24:08.605Z] 3522.00 IOPS, 13.76 MiB/s [2024-12-08T05:24:09.540Z] 3525.00 IOPS, 13.77 MiB/s [2024-12-08T05:24:10.476Z] 3540.75 IOPS, 13.83 MiB/s [2024-12-08T05:24:11.411Z] 3544.80 IOPS, 13.85 MiB/s [2024-12-08T05:24:12.346Z] 3536.83 IOPS, 13.82 MiB/s [2024-12-08T05:24:13.279Z] 3527.14 IOPS, 13.78 MiB/s [2024-12-08T05:24:14.652Z] 3543.38 IOPS, 13.84 MiB/s [2024-12-08T05:24:15.587Z] 3553.22 IOPS, 13.88 MiB/s [2024-12-08T05:24:15.587Z] 3550.00 IOPS, 13.87 MiB/s 00:20:25.468 Latency(us) 00:20:25.468 [2024-12-08T05:24:15.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.468 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:25.468 Verification LBA range: start 0x0 length 0x2000 00:20:25.468 TLSTESTn1 : 10.02 3554.10 13.88 0.00 0.00 35950.83 6140.97 29515.47 00:20:25.468 [2024-12-08T05:24:15.587Z] =================================================================================================================== 00:20:25.468 [2024-12-08T05:24:15.587Z] Total : 3554.10 13.88 0.00 0.00 35950.83 6140.97 29515.47 00:20:25.468 { 00:20:25.468 "results": [ 00:20:25.468 { 00:20:25.468 "job": "TLSTESTn1", 00:20:25.468 "core_mask": "0x4", 00:20:25.468 "workload": "verify", 00:20:25.468 "status": "finished", 00:20:25.468 "verify_range": { 00:20:25.468 "start": 0, 00:20:25.468 "length": 8192 00:20:25.468 }, 00:20:25.468 "queue_depth": 128, 00:20:25.468 "io_size": 4096, 00:20:25.468 "runtime": 10.023905, 00:20:25.468 "iops": 3554.1039145921673, 00:20:25.469 "mibps": 13.883218416375653, 00:20:25.469 "io_failed": 0, 00:20:25.469 "io_timeout": 0, 00:20:25.469 "avg_latency_us": 35950.82699468345, 00:20:25.469 "min_latency_us": 6140.965925925926, 00:20:25.469 "max_latency_us": 29515.472592592592 00:20:25.469 } 00:20:25.469 ], 00:20:25.469 "core_count": 1 00:20:25.469 } 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1084462 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1084462 ']' 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1084462 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1084462 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1084462' 00:20:25.469 killing process with pid 1084462 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1084462 00:20:25.469 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.469 00:20:25.469 Latency(us) 00:20:25.469 [2024-12-08T05:24:15.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.469 [2024-12-08T05:24:15.588Z] 
=================================================================================================================== 00:20:25.469 [2024-12-08T05:24:15.588Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1084462 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C1vWVCV4R9 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C1vWVCV4R9 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.469 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C1vWVCV4R9 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.C1vWVCV4R9 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1086291 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1086291 /var/tmp/bdevperf.sock 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1086291 ']' 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
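From this point the script exercises failure paths: the NOT helper from autotest_common.sh wraps run_bdevperf and inverts its exit status, so a connection attempt that must fail makes the test pass (the later es=1, (( es > 128 )) and (( !es == 0 )) trace lines are its bookkeeping). A reduced sketch of the pattern, with a simplified body assumed for illustration:

# Reduced sketch of the NOT() negative-test helper traced here; the real
# helper in autotest_common.sh also special-cases es > 128 (signal exits).
NOT() {
	local es=0
	"$@" || es=$?
	(( es != 0 ))   # succeed only if the wrapped command failed
}

NOT false && echo "negative test passed"   # self-contained demo
# As used at target/tls.sh@147, where attaching with the wrong key must fail:
# NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C1vWVCV4R9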
00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.726 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.726 [2024-12-08 06:24:15.636765] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:20:25.726 [2024-12-08 06:24:15.636863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086291 ] 00:20:25.726 [2024-12-08 06:24:15.702270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.726 [2024-12-08 06:24:15.756586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.982 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.982 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:25.982 06:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C1vWVCV4R9 00:20:26.238 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:26.497 [2024-12-08 06:24:16.384768] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.497 [2024-12-08 06:24:16.396828] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:26.497 [2024-12-08 06:24:16.397172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f580 (107): Transport endpoint is not connected 00:20:26.497 [2024-12-08 06:24:16.398163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f580 (9): Bad file descriptor 00:20:26.497 [2024-12-08 06:24:16.399163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:26.497 [2024-12-08 06:24:16.399184] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:26.497 [2024-12-08 06:24:16.399198] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:26.497 [2024-12-08 06:24:16.399216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
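This failure is the intended outcome of the test at tls.sh@147: the target was keyed with /tmp/tmp.Fup7OwbOQ1 (key_1) during setup, while the initiator just registered /tmp/tmp.C1vWVCV4R9 (key_2), so the TLS handshake never completes, the socket reports errno 107 (ENOTCONN), and the attach RPC fails with -5 (Input/output error) in the JSON-RPC exchange recorded next. Reduced to the two registrations (the long script paths are shortened to rpc.py here for readability), the mismatch is:

# Target side (setup_nvmf_tgt, traced earlier) holds key_1:
rpc.py keyring_file_add_key key0 /tmp/tmp.Fup7OwbOQ1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side (tls.sh@33/@35) registers key_2 under the same name:
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C1vWVCV4R9
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
	-a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
	-q nqn.2016-06.io.spdk:host1 --psk key0
# -> handshake cannot complete; the socket closes (errno 107) and the attach
#    RPC fails, as the JSON-RPC error response recorded next shows.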
00:20:26.497 request: 00:20:26.497 { 00:20:26.497 "name": "TLSTEST", 00:20:26.497 "trtype": "tcp", 00:20:26.497 "traddr": "10.0.0.2", 00:20:26.497 "adrfam": "ipv4", 00:20:26.497 "trsvcid": "4420", 00:20:26.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.497 "prchk_reftag": false, 00:20:26.497 "prchk_guard": false, 00:20:26.497 "hdgst": false, 00:20:26.497 "ddgst": false, 00:20:26.497 "psk": "key0", 00:20:26.497 "allow_unrecognized_csi": false, 00:20:26.497 "method": "bdev_nvme_attach_controller", 00:20:26.497 "req_id": 1 00:20:26.497 } 00:20:26.497 Got JSON-RPC error response 00:20:26.497 response: 00:20:26.497 { 00:20:26.497 "code": -5, 00:20:26.497 "message": "Input/output error" 00:20:26.497 } 00:20:26.497 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1086291 00:20:26.497 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1086291 ']' 00:20:26.497 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1086291 00:20:26.497 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:26.497 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.497 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1086291 00:20:26.497 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:26.497 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:26.497 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1086291' 00:20:26.497 killing process with pid 1086291 00:20:26.497 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1086291 00:20:26.497 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.497 00:20:26.497 Latency(us) 00:20:26.497 [2024-12-08T05:24:16.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.497 [2024-12-08T05:24:16.616Z] =================================================================================================================== 00:20:26.497 [2024-12-08T05:24:16.616Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.497 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1086291 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Fup7OwbOQ1 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.Fup7OwbOQ1 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Fup7OwbOQ1 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Fup7OwbOQ1 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1086432 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1086432 /var/tmp/bdevperf.sock 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1086432 ']' 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.757 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.757 [2024-12-08 06:24:16.730644] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:20:26.757 [2024-12-08 06:24:16.730750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086432 ] 00:20:26.757 [2024-12-08 06:24:16.797196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.757 [2024-12-08 06:24:16.854759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.032 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.032 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:27.032 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Fup7OwbOQ1 00:20:27.289 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:27.549 [2024-12-08 06:24:17.491947] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.549 [2024-12-08 06:24:17.499511] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:27.549 [2024-12-08 06:24:17.499546] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:27.549 [2024-12-08 06:24:17.499595] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:27.549 [2024-12-08 06:24:17.500339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b4580 (107): Transport endpoint is not connected 00:20:27.549 [2024-12-08 06:24:17.501330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b4580 (9): Bad file descriptor 00:20:27.549 [2024-12-08 06:24:17.502329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:27.549 [2024-12-08 06:24:17.502349] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:27.549 [2024-12-08 06:24:17.502363] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:27.549 [2024-12-08 06:24:17.502383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
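This second NOT case (tls.sh@150) fails for a different reason than the first: the key bytes match, but the target resolves the PSK by the identity string "NVMe0R01 <hostnqn> <subnqn>" (quoted verbatim in the tcp.c and posix.c errors above), and no PSK was ever registered for host2. Reduced to the relevant commands from the trace (paths again shortened to rpc.py):

# Only host1 was associated with cnode1 during setup:
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Connecting as host2 makes the target look up
#   "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"
# which has no PSK, so the handshake is aborted even though the key is valid.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
	-a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
	-q nqn.2016-06.io.spdk:host2 --psk key0
# -> the JSON-RPC exchange that follows again records code -5.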
00:20:27.549 request: 00:20:27.549 { 00:20:27.549 "name": "TLSTEST", 00:20:27.549 "trtype": "tcp", 00:20:27.549 "traddr": "10.0.0.2", 00:20:27.549 "adrfam": "ipv4", 00:20:27.549 "trsvcid": "4420", 00:20:27.549 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.549 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:27.549 "prchk_reftag": false, 00:20:27.549 "prchk_guard": false, 00:20:27.549 "hdgst": false, 00:20:27.549 "ddgst": false, 00:20:27.549 "psk": "key0", 00:20:27.549 "allow_unrecognized_csi": false, 00:20:27.549 "method": "bdev_nvme_attach_controller", 00:20:27.549 "req_id": 1 00:20:27.549 } 00:20:27.549 Got JSON-RPC error response 00:20:27.549 response: 00:20:27.549 { 00:20:27.549 "code": -5, 00:20:27.549 "message": "Input/output error" 00:20:27.549 } 00:20:27.549 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1086432 00:20:27.549 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1086432 ']' 00:20:27.549 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1086432 00:20:27.549 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:27.549 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.549 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1086432 00:20:27.549 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:27.549 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:27.549 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1086432' 00:20:27.549 killing process with pid 1086432 00:20:27.549 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1086432 00:20:27.549 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.549 00:20:27.549 Latency(us) 00:20:27.549 [2024-12-08T05:24:17.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.549 [2024-12-08T05:24:17.668Z] =================================================================================================================== 00:20:27.549 [2024-12-08T05:24:17.668Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:27.549 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1086432 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Fup7OwbOQ1 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.Fup7OwbOQ1 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Fup7OwbOQ1 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Fup7OwbOQ1 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1086572 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1086572 /var/tmp/bdevperf.sock 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1086572 ']' 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.814 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.814 [2024-12-08 06:24:17.830422] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:20:27.814 [2024-12-08 06:24:17.830511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086572 ] 00:20:27.814 [2024-12-08 06:24:17.896646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.073 [2024-12-08 06:24:17.955054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.073 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.073 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:28.073 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Fup7OwbOQ1 00:20:28.332 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:28.590 [2024-12-08 06:24:18.600265] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:28.590 [2024-12-08 06:24:18.612443] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:28.590 [2024-12-08 06:24:18.612476] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:28.590 [2024-12-08 06:24:18.612515] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:28.590 [2024-12-08 06:24:18.612689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1335580 (107): Transport endpoint is not connected 00:20:28.590 [2024-12-08 06:24:18.613679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1335580 (9): Bad file descriptor 00:20:28.590 [2024-12-08 06:24:18.614678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:28.590 [2024-12-08 06:24:18.614713] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:28.591 [2024-12-08 06:24:18.614734] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:28.591 [2024-12-08 06:24:18.614754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
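The third case (tls.sh@153) completes the matrix: valid key and valid host NQN, but the connect targets cnode2, which was never created on the target, so the lookup for "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" fails the same way. Taken together, the three negative tests so far pin down that the target resolves the PSK per (hostnqn, subnqn) pair:

# The three NOT cases so far (tls.sh@147, @150, @153) and why each is refused;
# run_bdevperf is the document's own helper traced above.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C1vWVCV4R9  # wrong key bytes
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Fup7OwbOQ1  # hostnqn not registered
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Fup7OwbOQ1  # subsystem does not exist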
00:20:28.591 request: 00:20:28.591 { 00:20:28.591 "name": "TLSTEST", 00:20:28.591 "trtype": "tcp", 00:20:28.591 "traddr": "10.0.0.2", 00:20:28.591 "adrfam": "ipv4", 00:20:28.591 "trsvcid": "4420", 00:20:28.591 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:28.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.591 "prchk_reftag": false, 00:20:28.591 "prchk_guard": false, 00:20:28.591 "hdgst": false, 00:20:28.591 "ddgst": false, 00:20:28.591 "psk": "key0", 00:20:28.591 "allow_unrecognized_csi": false, 00:20:28.591 "method": "bdev_nvme_attach_controller", 00:20:28.591 "req_id": 1 00:20:28.591 } 00:20:28.591 Got JSON-RPC error response 00:20:28.591 response: 00:20:28.591 { 00:20:28.591 "code": -5, 00:20:28.591 "message": "Input/output error" 00:20:28.591 } 00:20:28.591 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1086572 00:20:28.591 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1086572 ']' 00:20:28.591 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1086572 00:20:28.591 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:28.591 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.591 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1086572 00:20:28.591 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:28.591 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:28.591 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1086572' 00:20:28.591 killing process with pid 1086572 00:20:28.591 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1086572 00:20:28.591 Received shutdown signal, test time was about 10.000000 seconds 00:20:28.591 00:20:28.591 Latency(us) 00:20:28.591 [2024-12-08T05:24:18.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.591 [2024-12-08T05:24:18.710Z] =================================================================================================================== 00:20:28.591 [2024-12-08T05:24:18.710Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:28.591 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1086572 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:28.849 
06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1086715 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1086715 /var/tmp/bdevperf.sock 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1086715 ']' 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.849 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.849 [2024-12-08 06:24:18.943491] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:20:28.849 [2024-12-08 06:24:18.943581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086715 ] 00:20:29.135 [2024-12-08 06:24:19.009492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.135 [2024-12-08 06:24:19.063532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.135 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.135 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:29.135 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:29.393 [2024-12-08 06:24:19.416545] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:29.393 [2024-12-08 06:24:19.416595] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:29.393 request: 00:20:29.393 { 00:20:29.393 "name": "key0", 00:20:29.393 "path": "", 00:20:29.393 "method": "keyring_file_add_key", 00:20:29.393 "req_id": 1 00:20:29.393 } 00:20:29.393 Got JSON-RPC error response 00:20:29.393 response: 00:20:29.393 { 00:20:29.393 "code": -1, 00:20:29.393 "message": "Operation not permitted" 00:20:29.393 } 00:20:29.393 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:29.652 [2024-12-08 06:24:19.709482] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:29.652 [2024-12-08 06:24:19.709547] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:29.652 request: 00:20:29.652 { 00:20:29.652 "name": "TLSTEST", 00:20:29.652 "trtype": "tcp", 00:20:29.652 "traddr": "10.0.0.2", 00:20:29.652 "adrfam": "ipv4", 00:20:29.652 "trsvcid": "4420", 00:20:29.652 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.652 "prchk_reftag": false, 00:20:29.652 "prchk_guard": false, 00:20:29.652 "hdgst": false, 00:20:29.652 "ddgst": false, 00:20:29.652 "psk": "key0", 00:20:29.652 "allow_unrecognized_csi": false, 00:20:29.652 "method": "bdev_nvme_attach_controller", 00:20:29.652 "req_id": 1 00:20:29.652 } 00:20:29.652 Got JSON-RPC error response 00:20:29.652 response: 00:20:29.652 { 00:20:29.652 "code": -126, 00:20:29.652 "message": "Required key not available" 00:20:29.652 } 00:20:29.652 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1086715 00:20:29.652 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1086715 ']' 00:20:29.652 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1086715 00:20:29.652 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:29.652 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.652 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1086715 00:20:29.652 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:29.652 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:29.652 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1086715' 00:20:29.652 killing process with pid 1086715 00:20:29.652 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1086715 00:20:29.652 Received shutdown signal, test time was about 10.000000 seconds 00:20:29.652 00:20:29.652 Latency(us) 00:20:29.652 [2024-12-08T05:24:19.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.652 [2024-12-08T05:24:19.771Z] =================================================================================================================== 00:20:29.652 [2024-12-08T05:24:19.771Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:29.652 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1086715 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1082444 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1082444 ']' 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1082444 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1082444 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1082444' 00:20:29.911 killing process with pid 1082444 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1082444 00:20:29.911 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1082444 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:30.169 06:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.120wRmBAVN 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.120wRmBAVN 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1086870 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1086870 00:20:30.169 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1086870 ']' 00:20:30.170 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.170 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.170 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.170 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.170 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.428 [2024-12-08 06:24:20.330443] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:20:30.428 [2024-12-08 06:24:20.330530] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.428 [2024-12-08 06:24:20.405285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.428 [2024-12-08 06:24:20.464595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.428 [2024-12-08 06:24:20.464667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:30.428 [2024-12-08 06:24:20.464690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.428 [2024-12-08 06:24:20.464702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.428 [2024-12-08 06:24:20.464712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.428 [2024-12-08 06:24:20.465424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.686 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.686 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:30.686 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.686 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.686 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.686 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.686 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.120wRmBAVN 00:20:30.686 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.120wRmBAVN 00:20:30.686 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:30.944 [2024-12-08 06:24:20.912943] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.944 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:31.202 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:31.460 [2024-12-08 06:24:21.522637] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:31.460 [2024-12-08 06:24:21.522938] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.460 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:31.718 malloc0 00:20:31.718 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:32.281 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.120wRmBAVN 00:20:32.538 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.120wRmBAVN 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.120wRmBAVN 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1087166 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1087166 /var/tmp/bdevperf.sock 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1087166 ']' 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.797 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.797 [2024-12-08 06:24:22.820460] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:20:32.797 [2024-12-08 06:24:22.820548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087166 ] 00:20:32.797 [2024-12-08 06:24:22.888313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.055 [2024-12-08 06:24:22.949103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.055 06:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.055 06:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:33.055 06:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.120wRmBAVN 00:20:33.313 06:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:33.571 [2024-12-08 06:24:23.624409] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.829 TLSTESTn1 00:20:33.829 06:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:33.829 Running I/O for 10 seconds... 00:20:36.134 3583.00 IOPS, 14.00 MiB/s [2024-12-08T05:24:27.184Z] 3561.50 IOPS, 13.91 MiB/s [2024-12-08T05:24:28.117Z] 3608.00 IOPS, 14.09 MiB/s [2024-12-08T05:24:29.063Z] 3576.50 IOPS, 13.97 MiB/s [2024-12-08T05:24:29.994Z] 3556.80 IOPS, 13.89 MiB/s [2024-12-08T05:24:30.925Z] 3573.67 IOPS, 13.96 MiB/s [2024-12-08T05:24:31.856Z] 3573.14 IOPS, 13.96 MiB/s [2024-12-08T05:24:33.233Z] 3558.12 IOPS, 13.90 MiB/s [2024-12-08T05:24:34.200Z] 3557.22 IOPS, 13.90 MiB/s [2024-12-08T05:24:34.200Z] 3550.80 IOPS, 13.87 MiB/s 00:20:44.081 Latency(us) 00:20:44.081 [2024-12-08T05:24:34.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.081 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:44.081 Verification LBA range: start 0x0 length 0x2000 00:20:44.081 TLSTESTn1 : 10.02 3555.19 13.89 0.00 0.00 35941.70 6407.96 30874.74 00:20:44.081 [2024-12-08T05:24:34.200Z] =================================================================================================================== 00:20:44.081 [2024-12-08T05:24:34.200Z] Total : 3555.19 13.89 0.00 0.00 35941.70 6407.96 30874.74 00:20:44.081 { 00:20:44.081 "results": [ 00:20:44.081 { 00:20:44.081 "job": "TLSTESTn1", 00:20:44.081 "core_mask": "0x4", 00:20:44.081 "workload": "verify", 00:20:44.081 "status": "finished", 00:20:44.081 "verify_range": { 00:20:44.081 "start": 0, 00:20:44.081 "length": 8192 00:20:44.081 }, 00:20:44.081 "queue_depth": 128, 00:20:44.081 "io_size": 4096, 00:20:44.081 "runtime": 10.023381, 00:20:44.081 "iops": 3555.1876158354153, 00:20:44.081 "mibps": 13.887451624357091, 00:20:44.081 "io_failed": 0, 00:20:44.081 "io_timeout": 0, 00:20:44.081 "avg_latency_us": 35941.70125193188, 00:20:44.081 "min_latency_us": 6407.964444444445, 00:20:44.081 "max_latency_us": 30874.737777777777 00:20:44.082 } 00:20:44.082 ], 00:20:44.082 
"core_count": 1 00:20:44.082 } 00:20:44.082 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:44.082 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1087166 00:20:44.082 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1087166 ']' 00:20:44.082 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1087166 00:20:44.082 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:44.082 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.082 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1087166 00:20:44.082 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:44.082 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:44.082 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1087166' 00:20:44.082 killing process with pid 1087166 00:20:44.082 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1087166 00:20:44.082 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.082 00:20:44.082 Latency(us) 00:20:44.082 [2024-12-08T05:24:34.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.082 [2024-12-08T05:24:34.201Z] =================================================================================================================== 00:20:44.082 [2024-12-08T05:24:34.201Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:44.082 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1087166 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.120wRmBAVN 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.120wRmBAVN 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.120wRmBAVN 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.120wRmBAVN 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.120wRmBAVN 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1088481 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1088481 /var/tmp/bdevperf.sock 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1088481 ']' 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.082 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.363 [2024-12-08 06:24:34.216320] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:20:44.363 [2024-12-08 06:24:34.216409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088481 ] 00:20:44.363 [2024-12-08 06:24:34.287847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.363 [2024-12-08 06:24:34.348114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.363 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.363 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:44.363 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.120wRmBAVN 00:20:44.622 [2024-12-08 06:24:34.732072] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.120wRmBAVN': 0100666 00:20:44.622 [2024-12-08 06:24:34.732119] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:44.622 request: 00:20:44.622 { 00:20:44.622 "name": "key0", 00:20:44.622 "path": "/tmp/tmp.120wRmBAVN", 00:20:44.622 "method": "keyring_file_add_key", 00:20:44.622 "req_id": 1 00:20:44.622 } 00:20:44.622 Got JSON-RPC error response 00:20:44.622 response: 00:20:44.622 { 00:20:44.622 "code": -1, 00:20:44.622 "message": "Operation not permitted" 00:20:44.622 } 00:20:44.879 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:45.143 [2024-12-08 06:24:35.000923] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.143 [2024-12-08 06:24:35.000989] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:45.143 request: 00:20:45.143 { 00:20:45.143 "name": "TLSTEST", 00:20:45.143 "trtype": "tcp", 00:20:45.143 "traddr": "10.0.0.2", 00:20:45.143 "adrfam": "ipv4", 00:20:45.143 "trsvcid": "4420", 00:20:45.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.143 "prchk_reftag": false, 00:20:45.143 "prchk_guard": false, 00:20:45.143 "hdgst": false, 00:20:45.143 "ddgst": false, 00:20:45.143 "psk": "key0", 00:20:45.143 "allow_unrecognized_csi": false, 00:20:45.143 "method": "bdev_nvme_attach_controller", 00:20:45.143 "req_id": 1 00:20:45.143 } 00:20:45.143 Got JSON-RPC error response 00:20:45.143 response: 00:20:45.143 { 00:20:45.143 "code": -126, 00:20:45.143 "message": "Required key not available" 00:20:45.143 } 00:20:45.143 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1088481 00:20:45.143 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1088481 ']' 00:20:45.143 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1088481 00:20:45.143 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.143 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.143 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1088481 00:20:45.143 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:45.143 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:45.143 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1088481' 00:20:45.143 killing process with pid 1088481 00:20:45.143 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1088481 00:20:45.143 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.143 00:20:45.143 Latency(us) 00:20:45.143 [2024-12-08T05:24:35.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.143 [2024-12-08T05:24:35.263Z] =================================================================================================================== 00:20:45.144 [2024-12-08T05:24:35.263Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.144 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1088481 00:20:45.144 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:45.144 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:45.144 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.144 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.144 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.144 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1086870 00:20:45.144 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1086870 ']' 00:20:45.144 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1086870 00:20:45.144 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.144 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.404 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1086870 00:20:45.404 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:45.404 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:45.404 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1086870' 00:20:45.404 killing process with pid 1086870 00:20:45.404 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1086870 00:20:45.404 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1086870 00:20:45.661 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:45.661 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:45.661 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.661 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.661 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1088752 00:20:45.661 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:45.661 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1088752 00:20:45.661 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1088752 ']' 00:20:45.661 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.661 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.661 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.661 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.661 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.661 [2024-12-08 06:24:35.590587] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:20:45.661 [2024-12-08 06:24:35.590688] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.661 [2024-12-08 06:24:35.659711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.661 [2024-12-08 06:24:35.710978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.661 [2024-12-08 06:24:35.711058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.661 [2024-12-08 06:24:35.711081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.661 [2024-12-08 06:24:35.711091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.661 [2024-12-08 06:24:35.711100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
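The keyring_file_add_key failure above is deliberate: tls.sh first chmods the PSK file to 0666, and SPDK's keyring_file backend rejects key files that carry group/other permission bits (the check that printed 0100666). A minimal sketch of the rule being exercised, using the socket and paths from this run:

    chmod 0666 /tmp/tmp.120wRmBAVN                        # 0100666: group/other bits set
    scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.120wRmBAVN     # rejected: "Operation not permitted"
    chmod 0600 /tmp/tmp.120wRmBAVN                        # owner-only access
    scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.120wRmBAVN     # accepted

With key0 never registered, the following bdev_nvme_attach_controller fails with -126 "Required key not available", which is exactly what the NOT wrapper expects before the test restores 0600 permissions.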
00:20:45.661 [2024-12-08 06:24:35.711702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.120wRmBAVN 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.120wRmBAVN 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.120wRmBAVN 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.120wRmBAVN 00:20:45.918 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:46.175 [2024-12-08 06:24:36.100429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.175 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:46.432 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:46.689 [2024-12-08 06:24:36.653971] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:46.689 [2024-12-08 06:24:36.654251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.689 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:46.947 malloc0 00:20:46.947 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:47.204 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.120wRmBAVN 00:20:47.463 [2024-12-08 
06:24:37.450511] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.120wRmBAVN': 0100666 00:20:47.463 [2024-12-08 06:24:37.450558] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:47.463 request: 00:20:47.463 { 00:20:47.463 "name": "key0", 00:20:47.463 "path": "/tmp/tmp.120wRmBAVN", 00:20:47.463 "method": "keyring_file_add_key", 00:20:47.463 "req_id": 1 00:20:47.463 } 00:20:47.463 Got JSON-RPC error response 00:20:47.463 response: 00:20:47.463 { 00:20:47.463 "code": -1, 00:20:47.463 "message": "Operation not permitted" 00:20:47.463 } 00:20:47.463 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:47.722 [2024-12-08 06:24:37.731355] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:47.722 [2024-12-08 06:24:37.731425] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:47.722 request: 00:20:47.722 { 00:20:47.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.722 "host": "nqn.2016-06.io.spdk:host1", 00:20:47.722 "psk": "key0", 00:20:47.722 "method": "nvmf_subsystem_add_host", 00:20:47.722 "req_id": 1 00:20:47.722 } 00:20:47.722 Got JSON-RPC error response 00:20:47.722 response: 00:20:47.722 { 00:20:47.722 "code": -32603, 00:20:47.722 "message": "Internal error" 00:20:47.722 } 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1088752 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1088752 ']' 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1088752 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1088752 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1088752' 00:20:47.722 killing process with pid 1088752 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1088752 00:20:47.722 06:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1088752 00:20:47.980 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.120wRmBAVN 00:20:47.980 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:47.980 06:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.980 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.980 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.980 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1089055 00:20:47.980 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:47.980 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1089055 00:20:47.980 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1089055 ']' 00:20:47.980 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.980 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.980 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.980 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.980 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.980 [2024-12-08 06:24:38.072776] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:20:47.981 [2024-12-08 06:24:38.072863] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.239 [2024-12-08 06:24:38.144603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.239 [2024-12-08 06:24:38.199518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.239 [2024-12-08 06:24:38.199573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.239 [2024-12-08 06:24:38.199596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.239 [2024-12-08 06:24:38.199607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.239 [2024-12-08 06:24:38.199617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
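The target that just started is configured by setup_nvmf_tgt, whose trace follows. Condensed, the RPC sequence it drives is the one below (all values taken from this run; rpc.py talks to the default /var/tmp/spdk.sock):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.120wRmBAVN    # file is 0600 now, so this succeeds
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k on nvmf_subsystem_add_listener is what enables TLS on the TCP listener (hence the "TLS support is considered experimental" notice), and --psk binds key0 to the host entry so the initiator must present the matching pre-shared key.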
00:20:48.239 [2024-12-08 06:24:38.200269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.239 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.239 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:48.239 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:48.239 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.239 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.239 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.239 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.120wRmBAVN 00:20:48.239 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.120wRmBAVN 00:20:48.239 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:48.497 [2024-12-08 06:24:38.569204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.497 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:48.757 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:49.018 [2024-12-08 06:24:39.134836] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:49.018 [2024-12-08 06:24:39.135161] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.276 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:49.534 malloc0 00:20:49.534 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:49.793 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.120wRmBAVN 00:20:50.049 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:50.306 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1089339 00:20:50.306 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:50.306 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:50.306 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1089339 /var/tmp/bdevperf.sock 00:20:50.306 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1089339 ']' 00:20:50.306 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.306 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.306 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.306 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.306 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.565 [2024-12-08 06:24:40.440610] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:20:50.565 [2024-12-08 06:24:40.440696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089339 ] 00:20:50.565 [2024-12-08 06:24:40.508922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.565 [2024-12-08 06:24:40.567257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.565 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.565 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:50.565 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.120wRmBAVN 00:20:51.133 06:24:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:51.133 [2024-12-08 06:24:41.240290] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.390 TLSTESTn1 00:20:51.390 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:51.649 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:51.649 "subsystems": [ 00:20:51.649 { 00:20:51.649 "subsystem": "keyring", 00:20:51.649 "config": [ 00:20:51.649 { 00:20:51.649 "method": "keyring_file_add_key", 00:20:51.649 "params": { 00:20:51.649 "name": "key0", 00:20:51.649 "path": "/tmp/tmp.120wRmBAVN" 00:20:51.649 } 00:20:51.649 } 00:20:51.649 ] 00:20:51.649 }, 00:20:51.649 { 00:20:51.649 "subsystem": "iobuf", 00:20:51.649 "config": [ 00:20:51.649 { 00:20:51.649 "method": "iobuf_set_options", 00:20:51.649 "params": { 00:20:51.649 "small_pool_count": 8192, 00:20:51.649 "large_pool_count": 1024, 00:20:51.649 "small_bufsize": 8192, 00:20:51.649 "large_bufsize": 135168, 00:20:51.649 "enable_numa": false 00:20:51.649 } 00:20:51.649 } 00:20:51.649 ] 00:20:51.649 }, 00:20:51.649 { 00:20:51.649 "subsystem": "sock", 00:20:51.649 "config": [ 00:20:51.649 { 00:20:51.649 "method": "sock_set_default_impl", 00:20:51.649 "params": { 00:20:51.649 "impl_name": "posix" 
00:20:51.649 } 00:20:51.649 }, 00:20:51.649 { 00:20:51.649 "method": "sock_impl_set_options", 00:20:51.649 "params": { 00:20:51.649 "impl_name": "ssl", 00:20:51.649 "recv_buf_size": 4096, 00:20:51.649 "send_buf_size": 4096, 00:20:51.649 "enable_recv_pipe": true, 00:20:51.649 "enable_quickack": false, 00:20:51.649 "enable_placement_id": 0, 00:20:51.649 "enable_zerocopy_send_server": true, 00:20:51.649 "enable_zerocopy_send_client": false, 00:20:51.649 "zerocopy_threshold": 0, 00:20:51.649 "tls_version": 0, 00:20:51.649 "enable_ktls": false 00:20:51.649 } 00:20:51.649 }, 00:20:51.649 { 00:20:51.649 "method": "sock_impl_set_options", 00:20:51.649 "params": { 00:20:51.649 "impl_name": "posix", 00:20:51.649 "recv_buf_size": 2097152, 00:20:51.649 "send_buf_size": 2097152, 00:20:51.649 "enable_recv_pipe": true, 00:20:51.649 "enable_quickack": false, 00:20:51.649 "enable_placement_id": 0, 00:20:51.649 "enable_zerocopy_send_server": true, 00:20:51.649 "enable_zerocopy_send_client": false, 00:20:51.649 "zerocopy_threshold": 0, 00:20:51.649 "tls_version": 0, 00:20:51.649 "enable_ktls": false 00:20:51.649 } 00:20:51.649 } 00:20:51.649 ] 00:20:51.649 }, 00:20:51.649 { 00:20:51.649 "subsystem": "vmd", 00:20:51.649 "config": [] 00:20:51.649 }, 00:20:51.649 { 00:20:51.649 "subsystem": "accel", 00:20:51.649 "config": [ 00:20:51.649 { 00:20:51.649 "method": "accel_set_options", 00:20:51.649 "params": { 00:20:51.649 "small_cache_size": 128, 00:20:51.649 "large_cache_size": 16, 00:20:51.649 "task_count": 2048, 00:20:51.649 "sequence_count": 2048, 00:20:51.649 "buf_count": 2048 00:20:51.649 } 00:20:51.649 } 00:20:51.649 ] 00:20:51.649 }, 00:20:51.649 { 00:20:51.649 "subsystem": "bdev", 00:20:51.649 "config": [ 00:20:51.649 { 00:20:51.649 "method": "bdev_set_options", 00:20:51.649 "params": { 00:20:51.649 "bdev_io_pool_size": 65535, 00:20:51.649 "bdev_io_cache_size": 256, 00:20:51.649 "bdev_auto_examine": true, 00:20:51.649 "iobuf_small_cache_size": 128, 00:20:51.649 "iobuf_large_cache_size": 16 00:20:51.649 } 00:20:51.649 }, 00:20:51.649 { 00:20:51.649 "method": "bdev_raid_set_options", 00:20:51.649 "params": { 00:20:51.649 "process_window_size_kb": 1024, 00:20:51.649 "process_max_bandwidth_mb_sec": 0 00:20:51.649 } 00:20:51.649 }, 00:20:51.649 { 00:20:51.649 "method": "bdev_iscsi_set_options", 00:20:51.649 "params": { 00:20:51.649 "timeout_sec": 30 00:20:51.649 } 00:20:51.649 }, 00:20:51.649 { 00:20:51.649 "method": "bdev_nvme_set_options", 00:20:51.649 "params": { 00:20:51.649 "action_on_timeout": "none", 00:20:51.649 "timeout_us": 0, 00:20:51.649 "timeout_admin_us": 0, 00:20:51.649 "keep_alive_timeout_ms": 10000, 00:20:51.649 "arbitration_burst": 0, 00:20:51.707 "low_priority_weight": 0, 00:20:51.707 "medium_priority_weight": 0, 00:20:51.707 "high_priority_weight": 0, 00:20:51.707 "nvme_adminq_poll_period_us": 10000, 00:20:51.707 "nvme_ioq_poll_period_us": 0, 00:20:51.707 "io_queue_requests": 0, 00:20:51.707 "delay_cmd_submit": true, 00:20:51.707 "transport_retry_count": 4, 00:20:51.707 "bdev_retry_count": 3, 00:20:51.707 "transport_ack_timeout": 0, 00:20:51.707 "ctrlr_loss_timeout_sec": 0, 00:20:51.707 "reconnect_delay_sec": 0, 00:20:51.707 "fast_io_fail_timeout_sec": 0, 00:20:51.707 "disable_auto_failback": false, 00:20:51.707 "generate_uuids": false, 00:20:51.707 "transport_tos": 0, 00:20:51.707 "nvme_error_stat": false, 00:20:51.707 "rdma_srq_size": 0, 00:20:51.707 "io_path_stat": false, 00:20:51.707 "allow_accel_sequence": false, 00:20:51.707 "rdma_max_cq_size": 0, 00:20:51.707 
"rdma_cm_event_timeout_ms": 0, 00:20:51.707 "dhchap_digests": [ 00:20:51.707 "sha256", 00:20:51.707 "sha384", 00:20:51.707 "sha512" 00:20:51.707 ], 00:20:51.707 "dhchap_dhgroups": [ 00:20:51.707 "null", 00:20:51.707 "ffdhe2048", 00:20:51.707 "ffdhe3072", 00:20:51.707 "ffdhe4096", 00:20:51.707 "ffdhe6144", 00:20:51.707 "ffdhe8192" 00:20:51.707 ] 00:20:51.707 } 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "method": "bdev_nvme_set_hotplug", 00:20:51.707 "params": { 00:20:51.707 "period_us": 100000, 00:20:51.707 "enable": false 00:20:51.707 } 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "method": "bdev_malloc_create", 00:20:51.707 "params": { 00:20:51.707 "name": "malloc0", 00:20:51.707 "num_blocks": 8192, 00:20:51.707 "block_size": 4096, 00:20:51.707 "physical_block_size": 4096, 00:20:51.707 "uuid": "19c7593a-7733-4ea4-abff-d9d599a6a14c", 00:20:51.707 "optimal_io_boundary": 0, 00:20:51.707 "md_size": 0, 00:20:51.707 "dif_type": 0, 00:20:51.707 "dif_is_head_of_md": false, 00:20:51.707 "dif_pi_format": 0 00:20:51.707 } 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "method": "bdev_wait_for_examine" 00:20:51.707 } 00:20:51.707 ] 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "subsystem": "nbd", 00:20:51.707 "config": [] 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "subsystem": "scheduler", 00:20:51.707 "config": [ 00:20:51.707 { 00:20:51.707 "method": "framework_set_scheduler", 00:20:51.707 "params": { 00:20:51.707 "name": "static" 00:20:51.707 } 00:20:51.707 } 00:20:51.707 ] 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "subsystem": "nvmf", 00:20:51.707 "config": [ 00:20:51.707 { 00:20:51.707 "method": "nvmf_set_config", 00:20:51.707 "params": { 00:20:51.707 "discovery_filter": "match_any", 00:20:51.707 "admin_cmd_passthru": { 00:20:51.707 "identify_ctrlr": false 00:20:51.707 }, 00:20:51.707 "dhchap_digests": [ 00:20:51.707 "sha256", 00:20:51.707 "sha384", 00:20:51.707 "sha512" 00:20:51.707 ], 00:20:51.707 "dhchap_dhgroups": [ 00:20:51.707 "null", 00:20:51.707 "ffdhe2048", 00:20:51.707 "ffdhe3072", 00:20:51.707 "ffdhe4096", 00:20:51.707 "ffdhe6144", 00:20:51.707 "ffdhe8192" 00:20:51.707 ] 00:20:51.707 } 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "method": "nvmf_set_max_subsystems", 00:20:51.707 "params": { 00:20:51.707 "max_subsystems": 1024 00:20:51.707 } 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "method": "nvmf_set_crdt", 00:20:51.707 "params": { 00:20:51.707 "crdt1": 0, 00:20:51.707 "crdt2": 0, 00:20:51.707 "crdt3": 0 00:20:51.707 } 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "method": "nvmf_create_transport", 00:20:51.707 "params": { 00:20:51.707 "trtype": "TCP", 00:20:51.707 "max_queue_depth": 128, 00:20:51.707 "max_io_qpairs_per_ctrlr": 127, 00:20:51.707 "in_capsule_data_size": 4096, 00:20:51.707 "max_io_size": 131072, 00:20:51.707 "io_unit_size": 131072, 00:20:51.707 "max_aq_depth": 128, 00:20:51.707 "num_shared_buffers": 511, 00:20:51.707 "buf_cache_size": 4294967295, 00:20:51.707 "dif_insert_or_strip": false, 00:20:51.707 "zcopy": false, 00:20:51.707 "c2h_success": false, 00:20:51.707 "sock_priority": 0, 00:20:51.707 "abort_timeout_sec": 1, 00:20:51.707 "ack_timeout": 0, 00:20:51.707 "data_wr_pool_size": 0 00:20:51.707 } 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "method": "nvmf_create_subsystem", 00:20:51.707 "params": { 00:20:51.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.707 "allow_any_host": false, 00:20:51.707 "serial_number": "SPDK00000000000001", 00:20:51.707 "model_number": "SPDK bdev Controller", 00:20:51.707 "max_namespaces": 10, 00:20:51.707 "min_cntlid": 1, 00:20:51.707 
"max_cntlid": 65519, 00:20:51.707 "ana_reporting": false 00:20:51.707 } 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "method": "nvmf_subsystem_add_host", 00:20:51.707 "params": { 00:20:51.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.707 "host": "nqn.2016-06.io.spdk:host1", 00:20:51.707 "psk": "key0" 00:20:51.707 } 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "method": "nvmf_subsystem_add_ns", 00:20:51.707 "params": { 00:20:51.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.707 "namespace": { 00:20:51.707 "nsid": 1, 00:20:51.708 "bdev_name": "malloc0", 00:20:51.708 "nguid": "19C7593A77334EA4ABFFD9D599A6A14C", 00:20:51.708 "uuid": "19c7593a-7733-4ea4-abff-d9d599a6a14c", 00:20:51.708 "no_auto_visible": false 00:20:51.708 } 00:20:51.708 } 00:20:51.708 }, 00:20:51.708 { 00:20:51.708 "method": "nvmf_subsystem_add_listener", 00:20:51.708 "params": { 00:20:51.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.708 "listen_address": { 00:20:51.708 "trtype": "TCP", 00:20:51.708 "adrfam": "IPv4", 00:20:51.708 "traddr": "10.0.0.2", 00:20:51.708 "trsvcid": "4420" 00:20:51.708 }, 00:20:51.708 "secure_channel": true 00:20:51.708 } 00:20:51.708 } 00:20:51.708 ] 00:20:51.708 } 00:20:51.708 ] 00:20:51.708 }' 00:20:51.708 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:51.968 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:51.968 "subsystems": [ 00:20:51.968 { 00:20:51.968 "subsystem": "keyring", 00:20:51.968 "config": [ 00:20:51.968 { 00:20:51.968 "method": "keyring_file_add_key", 00:20:51.968 "params": { 00:20:51.968 "name": "key0", 00:20:51.968 "path": "/tmp/tmp.120wRmBAVN" 00:20:51.968 } 00:20:51.968 } 00:20:51.968 ] 00:20:51.968 }, 00:20:51.968 { 00:20:51.968 "subsystem": "iobuf", 00:20:51.968 "config": [ 00:20:51.968 { 00:20:51.968 "method": "iobuf_set_options", 00:20:51.968 "params": { 00:20:51.968 "small_pool_count": 8192, 00:20:51.968 "large_pool_count": 1024, 00:20:51.968 "small_bufsize": 8192, 00:20:51.968 "large_bufsize": 135168, 00:20:51.968 "enable_numa": false 00:20:51.968 } 00:20:51.968 } 00:20:51.968 ] 00:20:51.968 }, 00:20:51.968 { 00:20:51.968 "subsystem": "sock", 00:20:51.968 "config": [ 00:20:51.968 { 00:20:51.968 "method": "sock_set_default_impl", 00:20:51.968 "params": { 00:20:51.968 "impl_name": "posix" 00:20:51.968 } 00:20:51.968 }, 00:20:51.968 { 00:20:51.968 "method": "sock_impl_set_options", 00:20:51.968 "params": { 00:20:51.968 "impl_name": "ssl", 00:20:51.968 "recv_buf_size": 4096, 00:20:51.968 "send_buf_size": 4096, 00:20:51.968 "enable_recv_pipe": true, 00:20:51.968 "enable_quickack": false, 00:20:51.968 "enable_placement_id": 0, 00:20:51.968 "enable_zerocopy_send_server": true, 00:20:51.968 "enable_zerocopy_send_client": false, 00:20:51.968 "zerocopy_threshold": 0, 00:20:51.968 "tls_version": 0, 00:20:51.968 "enable_ktls": false 00:20:51.968 } 00:20:51.968 }, 00:20:51.968 { 00:20:51.968 "method": "sock_impl_set_options", 00:20:51.968 "params": { 00:20:51.968 "impl_name": "posix", 00:20:51.968 "recv_buf_size": 2097152, 00:20:51.968 "send_buf_size": 2097152, 00:20:51.968 "enable_recv_pipe": true, 00:20:51.968 "enable_quickack": false, 00:20:51.968 "enable_placement_id": 0, 00:20:51.968 "enable_zerocopy_send_server": true, 00:20:51.968 "enable_zerocopy_send_client": false, 00:20:51.968 "zerocopy_threshold": 0, 00:20:51.968 "tls_version": 0, 00:20:51.968 "enable_ktls": false 00:20:51.968 } 00:20:51.968 
} 00:20:51.968 ] 00:20:51.968 }, 00:20:51.968 { 00:20:51.968 "subsystem": "vmd", 00:20:51.968 "config": [] 00:20:51.968 }, 00:20:51.968 { 00:20:51.968 "subsystem": "accel", 00:20:51.968 "config": [ 00:20:51.968 { 00:20:51.968 "method": "accel_set_options", 00:20:51.968 "params": { 00:20:51.968 "small_cache_size": 128, 00:20:51.968 "large_cache_size": 16, 00:20:51.968 "task_count": 2048, 00:20:51.968 "sequence_count": 2048, 00:20:51.968 "buf_count": 2048 00:20:51.968 } 00:20:51.968 } 00:20:51.968 ] 00:20:51.968 }, 00:20:51.968 { 00:20:51.968 "subsystem": "bdev", 00:20:51.968 "config": [ 00:20:51.968 { 00:20:51.968 "method": "bdev_set_options", 00:20:51.968 "params": { 00:20:51.968 "bdev_io_pool_size": 65535, 00:20:51.968 "bdev_io_cache_size": 256, 00:20:51.968 "bdev_auto_examine": true, 00:20:51.968 "iobuf_small_cache_size": 128, 00:20:51.968 "iobuf_large_cache_size": 16 00:20:51.968 } 00:20:51.968 }, 00:20:51.968 { 00:20:51.968 "method": "bdev_raid_set_options", 00:20:51.968 "params": { 00:20:51.968 "process_window_size_kb": 1024, 00:20:51.968 "process_max_bandwidth_mb_sec": 0 00:20:51.968 } 00:20:51.968 }, 00:20:51.968 { 00:20:51.968 "method": "bdev_iscsi_set_options", 00:20:51.968 "params": { 00:20:51.968 "timeout_sec": 30 00:20:51.968 } 00:20:51.968 }, 00:20:51.968 { 00:20:51.968 "method": "bdev_nvme_set_options", 00:20:51.968 "params": { 00:20:51.968 "action_on_timeout": "none", 00:20:51.968 "timeout_us": 0, 00:20:51.968 "timeout_admin_us": 0, 00:20:51.968 "keep_alive_timeout_ms": 10000, 00:20:51.968 "arbitration_burst": 0, 00:20:51.968 "low_priority_weight": 0, 00:20:51.969 "medium_priority_weight": 0, 00:20:51.969 "high_priority_weight": 0, 00:20:51.969 "nvme_adminq_poll_period_us": 10000, 00:20:51.969 "nvme_ioq_poll_period_us": 0, 00:20:51.969 "io_queue_requests": 512, 00:20:51.969 "delay_cmd_submit": true, 00:20:51.969 "transport_retry_count": 4, 00:20:51.969 "bdev_retry_count": 3, 00:20:51.969 "transport_ack_timeout": 0, 00:20:51.969 "ctrlr_loss_timeout_sec": 0, 00:20:51.969 "reconnect_delay_sec": 0, 00:20:51.969 "fast_io_fail_timeout_sec": 0, 00:20:51.969 "disable_auto_failback": false, 00:20:51.969 "generate_uuids": false, 00:20:51.969 "transport_tos": 0, 00:20:51.969 "nvme_error_stat": false, 00:20:51.969 "rdma_srq_size": 0, 00:20:51.969 "io_path_stat": false, 00:20:51.969 "allow_accel_sequence": false, 00:20:51.969 "rdma_max_cq_size": 0, 00:20:51.969 "rdma_cm_event_timeout_ms": 0, 00:20:51.969 "dhchap_digests": [ 00:20:51.969 "sha256", 00:20:51.969 "sha384", 00:20:51.969 "sha512" 00:20:51.969 ], 00:20:51.969 "dhchap_dhgroups": [ 00:20:51.969 "null", 00:20:51.969 "ffdhe2048", 00:20:51.969 "ffdhe3072", 00:20:51.969 "ffdhe4096", 00:20:51.969 "ffdhe6144", 00:20:51.969 "ffdhe8192" 00:20:51.969 ] 00:20:51.969 } 00:20:51.969 }, 00:20:51.969 { 00:20:51.969 "method": "bdev_nvme_attach_controller", 00:20:51.969 "params": { 00:20:51.969 "name": "TLSTEST", 00:20:51.969 "trtype": "TCP", 00:20:51.969 "adrfam": "IPv4", 00:20:51.969 "traddr": "10.0.0.2", 00:20:51.969 "trsvcid": "4420", 00:20:51.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.969 "prchk_reftag": false, 00:20:51.969 "prchk_guard": false, 00:20:51.969 "ctrlr_loss_timeout_sec": 0, 00:20:51.969 "reconnect_delay_sec": 0, 00:20:51.969 "fast_io_fail_timeout_sec": 0, 00:20:51.969 "psk": "key0", 00:20:51.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.969 "hdgst": false, 00:20:51.969 "ddgst": false, 00:20:51.969 "multipath": "multipath" 00:20:51.969 } 00:20:51.969 }, 00:20:51.969 { 00:20:51.969 "method": 
"bdev_nvme_set_hotplug", 00:20:51.969 "params": { 00:20:51.969 "period_us": 100000, 00:20:51.969 "enable": false 00:20:51.969 } 00:20:51.969 }, 00:20:51.969 { 00:20:51.969 "method": "bdev_wait_for_examine" 00:20:51.969 } 00:20:51.969 ] 00:20:51.969 }, 00:20:51.969 { 00:20:51.969 "subsystem": "nbd", 00:20:51.969 "config": [] 00:20:51.969 } 00:20:51.969 ] 00:20:51.969 }' 00:20:51.969 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1089339 00:20:51.969 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1089339 ']' 00:20:51.969 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1089339 00:20:51.969 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:51.969 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.969 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1089339 00:20:51.969 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:51.969 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:51.969 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1089339' 00:20:51.969 killing process with pid 1089339 00:20:51.969 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1089339 00:20:51.969 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.969 00:20:51.969 Latency(us) 00:20:51.969 [2024-12-08T05:24:42.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.969 [2024-12-08T05:24:42.088Z] =================================================================================================================== 00:20:51.969 [2024-12-08T05:24:42.088Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:51.969 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1089339 00:20:52.234 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1089055 00:20:52.234 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1089055 ']' 00:20:52.234 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1089055 00:20:52.234 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:52.234 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.234 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1089055 00:20:52.234 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:52.234 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:52.234 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1089055' 00:20:52.234 killing process with pid 1089055 00:20:52.234 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1089055 00:20:52.234 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1089055 00:20:52.495 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:52.495 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:52.495 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:52.495 "subsystems": [ 00:20:52.495 { 00:20:52.495 "subsystem": "keyring", 00:20:52.495 "config": [ 00:20:52.495 { 00:20:52.495 "method": "keyring_file_add_key", 00:20:52.495 "params": { 00:20:52.495 "name": "key0", 00:20:52.495 "path": "/tmp/tmp.120wRmBAVN" 00:20:52.495 } 00:20:52.495 } 00:20:52.495 ] 00:20:52.495 }, 00:20:52.495 { 00:20:52.495 "subsystem": "iobuf", 00:20:52.495 "config": [ 00:20:52.495 { 00:20:52.495 "method": "iobuf_set_options", 00:20:52.495 "params": { 00:20:52.495 "small_pool_count": 8192, 00:20:52.495 "large_pool_count": 1024, 00:20:52.495 "small_bufsize": 8192, 00:20:52.495 "large_bufsize": 135168, 00:20:52.495 "enable_numa": false 00:20:52.495 } 00:20:52.495 } 00:20:52.495 ] 00:20:52.495 }, 00:20:52.495 { 00:20:52.495 "subsystem": "sock", 00:20:52.495 "config": [ 00:20:52.495 { 00:20:52.495 "method": "sock_set_default_impl", 00:20:52.495 "params": { 00:20:52.495 "impl_name": "posix" 00:20:52.495 } 00:20:52.495 }, 00:20:52.495 { 00:20:52.495 "method": "sock_impl_set_options", 00:20:52.495 "params": { 00:20:52.495 "impl_name": "ssl", 00:20:52.495 "recv_buf_size": 4096, 00:20:52.495 "send_buf_size": 4096, 00:20:52.495 "enable_recv_pipe": true, 00:20:52.495 "enable_quickack": false, 00:20:52.495 "enable_placement_id": 0, 00:20:52.495 "enable_zerocopy_send_server": true, 00:20:52.495 "enable_zerocopy_send_client": false, 00:20:52.495 "zerocopy_threshold": 0, 00:20:52.495 "tls_version": 0, 00:20:52.495 "enable_ktls": false 00:20:52.495 } 00:20:52.495 }, 00:20:52.495 { 00:20:52.495 "method": "sock_impl_set_options", 00:20:52.495 "params": { 00:20:52.495 "impl_name": "posix", 00:20:52.495 "recv_buf_size": 2097152, 00:20:52.495 "send_buf_size": 2097152, 00:20:52.495 "enable_recv_pipe": true, 00:20:52.495 "enable_quickack": false, 00:20:52.495 "enable_placement_id": 0, 00:20:52.495 "enable_zerocopy_send_server": true, 00:20:52.495 "enable_zerocopy_send_client": false, 00:20:52.495 "zerocopy_threshold": 0, 00:20:52.495 "tls_version": 0, 00:20:52.495 "enable_ktls": false 00:20:52.495 } 00:20:52.495 } 00:20:52.495 ] 00:20:52.495 }, 00:20:52.495 { 00:20:52.495 "subsystem": "vmd", 00:20:52.495 "config": [] 00:20:52.495 }, 00:20:52.495 { 00:20:52.495 "subsystem": "accel", 00:20:52.495 "config": [ 00:20:52.495 { 00:20:52.495 "method": "accel_set_options", 00:20:52.495 "params": { 00:20:52.495 "small_cache_size": 128, 00:20:52.495 "large_cache_size": 16, 00:20:52.495 "task_count": 2048, 00:20:52.495 "sequence_count": 2048, 00:20:52.495 "buf_count": 2048 00:20:52.495 } 00:20:52.495 } 00:20:52.495 ] 00:20:52.495 }, 00:20:52.495 { 00:20:52.495 "subsystem": "bdev", 00:20:52.495 "config": [ 00:20:52.495 { 00:20:52.495 "method": "bdev_set_options", 00:20:52.495 "params": { 00:20:52.495 "bdev_io_pool_size": 65535, 00:20:52.495 "bdev_io_cache_size": 256, 00:20:52.495 "bdev_auto_examine": true, 00:20:52.495 "iobuf_small_cache_size": 128, 00:20:52.495 "iobuf_large_cache_size": 16 00:20:52.495 } 00:20:52.495 }, 00:20:52.495 { 00:20:52.495 "method": "bdev_raid_set_options", 00:20:52.495 "params": { 00:20:52.495 "process_window_size_kb": 1024, 00:20:52.495 "process_max_bandwidth_mb_sec": 0 00:20:52.495 } 00:20:52.495 }, 00:20:52.495 { 00:20:52.495 "method": "bdev_iscsi_set_options", 00:20:52.495 "params": { 00:20:52.495 
"timeout_sec": 30 00:20:52.495 } 00:20:52.495 }, 00:20:52.495 { 00:20:52.495 "method": "bdev_nvme_set_options", 00:20:52.495 "params": { 00:20:52.495 "action_on_timeout": "none", 00:20:52.495 "timeout_us": 0, 00:20:52.495 "timeout_admin_us": 0, 00:20:52.495 "keep_alive_timeout_ms": 10000, 00:20:52.495 "arbitration_burst": 0, 00:20:52.495 "low_priority_weight": 0, 00:20:52.495 "medium_priority_weight": 0, 00:20:52.495 "high_priority_weight": 0, 00:20:52.495 "nvme_adminq_poll_period_us": 10000, 00:20:52.495 "nvme_ioq_poll_period_us": 0, 00:20:52.495 "io_queue_requests": 0, 00:20:52.495 "delay_cmd_submit": true, 00:20:52.495 "transport_retry_count": 4, 00:20:52.495 "bdev_retry_count": 3, 00:20:52.495 "transport_ack_timeout": 0, 00:20:52.495 "ctrlr_loss_timeout_sec": 0, 00:20:52.495 "reconnect_delay_sec": 0, 00:20:52.495 "fast_io_fail_timeout_sec": 0, 00:20:52.496 "disable_auto_failback": false, 00:20:52.496 "generate_uuids": false, 00:20:52.496 "transport_tos": 0, 00:20:52.496 "nvme_error_stat": false, 00:20:52.496 "rdma_srq_size": 0, 00:20:52.496 "io_path_stat": false, 00:20:52.496 "allow_accel_sequence": false, 00:20:52.496 "rdma_max_cq_size": 0, 00:20:52.496 "rdma_cm_event_timeout_ms": 0, 00:20:52.496 "dhchap_digests": [ 00:20:52.496 "sha256", 00:20:52.496 "sha384", 00:20:52.496 "sha512" 00:20:52.496 ], 00:20:52.496 "dhchap_dhgroups": [ 00:20:52.496 "null", 00:20:52.496 "ffdhe2048", 00:20:52.496 "ffdhe3072", 00:20:52.496 "ffdhe4096", 00:20:52.496 "ffdhe6144", 00:20:52.496 "ffdhe8192" 00:20:52.496 ] 00:20:52.496 } 00:20:52.496 }, 00:20:52.496 { 00:20:52.496 "method": "bdev_nvme_set_hotplug", 00:20:52.496 "params": { 00:20:52.496 "period_us": 100000, 00:20:52.496 "enable": false 00:20:52.496 } 00:20:52.496 }, 00:20:52.496 { 00:20:52.496 "method": "bdev_malloc_create", 00:20:52.496 "params": { 00:20:52.496 "name": "malloc0", 00:20:52.496 "num_blocks": 8192, 00:20:52.496 "block_size": 4096, 00:20:52.496 "physical_block_size": 4096, 00:20:52.496 "uuid": "19c7593a-7733-4ea4-abff-d9d599a6a14c", 00:20:52.496 "optimal_io_boundary": 0, 00:20:52.496 "md_size": 0, 00:20:52.496 "dif_type": 0, 00:20:52.496 "dif_is_head_of_md": false, 00:20:52.496 "dif_pi_format": 0 00:20:52.496 } 00:20:52.496 }, 00:20:52.496 { 00:20:52.496 "method": "bdev_wait_for_examine" 00:20:52.496 } 00:20:52.496 ] 00:20:52.496 }, 00:20:52.496 { 00:20:52.496 "subsystem": "nbd", 00:20:52.496 "config": [] 00:20:52.496 }, 00:20:52.496 { 00:20:52.496 "subsystem": "scheduler", 00:20:52.496 "config": [ 00:20:52.496 { 00:20:52.496 "method": "framework_set_scheduler", 00:20:52.496 "params": { 00:20:52.496 "name": "static" 00:20:52.496 } 00:20:52.496 } 00:20:52.496 ] 00:20:52.496 }, 00:20:52.496 { 00:20:52.496 "subsystem": "nvmf", 00:20:52.496 "config": [ 00:20:52.496 { 00:20:52.496 "method": "nvmf_set_config", 00:20:52.496 "params": { 00:20:52.496 "discovery_filter": "match_any", 00:20:52.496 "admin_cmd_passthru": { 00:20:52.496 "identify_ctrlr": false 00:20:52.496 }, 00:20:52.496 "dhchap_digests": [ 00:20:52.496 "sha256", 00:20:52.496 "sha384", 00:20:52.496 "sha512" 00:20:52.496 ], 00:20:52.496 "dhchap_dhgroups": [ 00:20:52.496 "null", 00:20:52.496 "ffdhe2048", 00:20:52.496 "ffdhe3072", 00:20:52.496 "ffdhe4096", 00:20:52.496 "ffdhe6144", 00:20:52.496 "ffdhe8192" 00:20:52.496 ] 00:20:52.496 } 00:20:52.496 }, 00:20:52.496 { 00:20:52.496 "method": "nvmf_set_max_subsystems", 00:20:52.496 "params": { 00:20:52.496 "max_subsystems": 1024 00:20:52.496 } 00:20:52.496 }, 00:20:52.496 { 00:20:52.496 "method": "nvmf_set_crdt", 00:20:52.496 "params": { 
00:20:52.496 "crdt1": 0, 00:20:52.496 "crdt2": 0, 00:20:52.496 "crdt3": 0 00:20:52.496 } 00:20:52.496 }, 00:20:52.496 { 00:20:52.496 "method": "nvmf_create_transport", 00:20:52.496 "params": { 00:20:52.496 "trtype": "TCP", 00:20:52.496 "max_queue_depth": 128, 00:20:52.496 "max_io_qpairs_per_ctrlr": 127, 00:20:52.496 "in_capsule_data_size": 4096, 00:20:52.496 "max_io_size": 131072, 00:20:52.496 "io_unit_size": 131072, 00:20:52.496 "max_aq_depth": 128, 00:20:52.496 "num_shared_buffers": 511, 00:20:52.496 "buf_cache_size": 4294967295, 00:20:52.496 "dif_insert_or_strip": false, 00:20:52.496 "zcopy": false, 00:20:52.496 "c2h_success": false, 00:20:52.496 "sock_priority": 0, 00:20:52.496 "abort_timeout_sec": 1, 00:20:52.496 "ack_timeout": 0, 00:20:52.496 "data_wr_pool_size": 0 00:20:52.496 } 00:20:52.496 }, 00:20:52.496 { 00:20:52.496 "method": "nvmf_create_subsystem", 00:20:52.496 "params": { 00:20:52.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.496 "allow_any_host": false, 00:20:52.496 "serial_number": "SPDK00000000000001", 00:20:52.496 "model_number": "SPDK bdev Controller", 00:20:52.496 "max_namespaces": 10, 00:20:52.496 "min_cntlid": 1, 00:20:52.496 "max_cntlid": 65519, 00:20:52.496 "ana_reporting": false 00:20:52.496 } 00:20:52.496 }, 00:20:52.496 { 00:20:52.496 "method": "nvmf_subsystem_add_host", 00:20:52.496 "params": { 00:20:52.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.496 "host": "nqn.2016-06.io.spdk:host1", 00:20:52.496 "psk": "key0" 00:20:52.496 } 00:20:52.496 }, 00:20:52.496 { 00:20:52.496 "method": "nvmf_subsystem_add_ns", 00:20:52.496 "params": { 00:20:52.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.496 "namespace": { 00:20:52.496 "nsid": 1, 00:20:52.496 "bdev_name": "malloc0", 00:20:52.496 "nguid": "19C7593A77334EA4ABFFD9D599A6A14C", 00:20:52.496 "uuid": "19c7593a-7733-4ea4-abff-d9d599a6a14c", 00:20:52.496 "no_auto_visible": false 00:20:52.496 } 00:20:52.496 } 00:20:52.496 }, 00:20:52.496 { 00:20:52.496 "method": "nvmf_subsystem_add_listener", 00:20:52.496 "params": { 00:20:52.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.496 "listen_address": { 00:20:52.496 "trtype": "TCP", 00:20:52.496 "adrfam": "IPv4", 00:20:52.496 "traddr": "10.0.0.2", 00:20:52.496 "trsvcid": "4420" 00:20:52.496 }, 00:20:52.496 "secure_channel": true 00:20:52.496 } 00:20:52.496 } 00:20:52.496 ] 00:20:52.496 } 00:20:52.496 ] 00:20:52.496 }' 00:20:52.496 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:52.496 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.496 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1089618 00:20:52.496 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:52.496 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1089618 00:20:52.496 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1089618 ']' 00:20:52.496 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.496 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.496 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:52.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.496 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.496 06:24:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.496 [2024-12-08 06:24:42.574283] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:20:52.496 [2024-12-08 06:24:42.574386] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.755 [2024-12-08 06:24:42.648612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.755 [2024-12-08 06:24:42.706153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.755 [2024-12-08 06:24:42.706229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.755 [2024-12-08 06:24:42.706243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.755 [2024-12-08 06:24:42.706254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.755 [2024-12-08 06:24:42.706264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.755 [2024-12-08 06:24:42.707026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.013 [2024-12-08 06:24:42.955923] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.013 [2024-12-08 06:24:42.987913] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.014 [2024-12-08 06:24:42.988207] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1089771 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1089771 /var/tmp/bdevperf.sock 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1089771 ']' 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.578 06:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:53.578 "subsystems": [ 00:20:53.578 { 00:20:53.578 "subsystem": "keyring", 00:20:53.578 "config": [ 00:20:53.578 { 00:20:53.578 "method": "keyring_file_add_key", 00:20:53.578 "params": { 00:20:53.578 "name": "key0", 00:20:53.578 "path": "/tmp/tmp.120wRmBAVN" 00:20:53.578 } 00:20:53.578 } 00:20:53.578 ] 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "subsystem": "iobuf", 00:20:53.578 "config": [ 00:20:53.578 { 00:20:53.578 "method": "iobuf_set_options", 00:20:53.578 "params": { 00:20:53.578 "small_pool_count": 8192, 00:20:53.578 "large_pool_count": 1024, 00:20:53.578 "small_bufsize": 8192, 00:20:53.578 "large_bufsize": 135168, 00:20:53.578 "enable_numa": false 00:20:53.578 } 00:20:53.578 } 00:20:53.578 ] 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "subsystem": "sock", 00:20:53.578 "config": [ 00:20:53.578 { 00:20:53.578 "method": "sock_set_default_impl", 00:20:53.578 "params": { 00:20:53.578 "impl_name": "posix" 00:20:53.578 } 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "method": "sock_impl_set_options", 00:20:53.578 "params": { 00:20:53.578 "impl_name": "ssl", 00:20:53.578 "recv_buf_size": 4096, 00:20:53.578 "send_buf_size": 4096, 00:20:53.578 "enable_recv_pipe": true, 00:20:53.578 "enable_quickack": false, 00:20:53.578 "enable_placement_id": 0, 00:20:53.578 "enable_zerocopy_send_server": true, 00:20:53.578 "enable_zerocopy_send_client": false, 00:20:53.578 "zerocopy_threshold": 0, 00:20:53.578 "tls_version": 0, 00:20:53.578 "enable_ktls": false 00:20:53.578 } 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "method": "sock_impl_set_options", 00:20:53.578 "params": { 00:20:53.578 "impl_name": "posix", 00:20:53.578 "recv_buf_size": 2097152, 00:20:53.578 "send_buf_size": 2097152, 00:20:53.578 "enable_recv_pipe": true, 00:20:53.578 "enable_quickack": false, 00:20:53.578 "enable_placement_id": 0, 00:20:53.578 "enable_zerocopy_send_server": true, 00:20:53.578 "enable_zerocopy_send_client": false, 00:20:53.578 "zerocopy_threshold": 0, 00:20:53.578 "tls_version": 0, 00:20:53.578 "enable_ktls": false 00:20:53.578 } 00:20:53.578 } 00:20:53.578 ] 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "subsystem": "vmd", 00:20:53.578 "config": [] 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "subsystem": "accel", 00:20:53.578 "config": [ 00:20:53.578 { 00:20:53.578 "method": "accel_set_options", 00:20:53.578 "params": { 00:20:53.578 "small_cache_size": 128, 00:20:53.578 "large_cache_size": 16, 00:20:53.578 "task_count": 2048, 00:20:53.578 "sequence_count": 2048, 00:20:53.578 "buf_count": 2048 00:20:53.578 } 00:20:53.578 } 00:20:53.578 ] 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "subsystem": "bdev", 00:20:53.578 "config": [ 00:20:53.578 { 00:20:53.578 "method": "bdev_set_options", 00:20:53.578 "params": { 00:20:53.578 "bdev_io_pool_size": 65535, 00:20:53.578 "bdev_io_cache_size": 256, 00:20:53.578 "bdev_auto_examine": true, 00:20:53.578 "iobuf_small_cache_size": 128, 00:20:53.578 "iobuf_large_cache_size": 16 00:20:53.578 } 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "method": "bdev_raid_set_options", 00:20:53.578 "params": { 00:20:53.578 "process_window_size_kb": 1024, 00:20:53.578 "process_max_bandwidth_mb_sec": 0 00:20:53.578 } 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "method": "bdev_iscsi_set_options", 00:20:53.578 "params": { 00:20:53.578 "timeout_sec": 30 00:20:53.578 } 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "method": "bdev_nvme_set_options", 00:20:53.578 "params": { 00:20:53.578 "action_on_timeout": "none", 00:20:53.578 
"timeout_us": 0, 00:20:53.578 "timeout_admin_us": 0, 00:20:53.578 "keep_alive_timeout_ms": 10000, 00:20:53.578 "arbitration_burst": 0, 00:20:53.578 "low_priority_weight": 0, 00:20:53.578 "medium_priority_weight": 0, 00:20:53.578 "high_priority_weight": 0, 00:20:53.578 "nvme_adminq_poll_period_us": 10000, 00:20:53.578 "nvme_ioq_poll_period_us": 0, 00:20:53.578 "io_queue_requests": 512, 00:20:53.578 "delay_cmd_submit": true, 00:20:53.578 "transport_retry_count": 4, 00:20:53.578 "bdev_retry_count": 3, 00:20:53.578 "transport_ack_timeout": 0, 00:20:53.578 "ctrlr_loss_timeout_sec": 0, 00:20:53.578 "reconnect_delay_sec": 0, 00:20:53.578 "fast_io_fail_timeout_sec": 0, 00:20:53.578 "disable_auto_failback": false, 00:20:53.578 "generate_uuids": false, 00:20:53.578 "transport_tos": 0, 00:20:53.578 "nvme_error_stat": false, 00:20:53.578 "rdma_srq_size": 0, 00:20:53.578 "io_path_stat": false, 00:20:53.578 "allow_accel_sequence": false, 00:20:53.578 "rdma_max_cq_size": 0, 00:20:53.578 "rdma_cm_event_timeout_ms": 0 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.578 , 00:20:53.578 "dhchap_digests": [ 00:20:53.578 "sha256", 00:20:53.578 "sha384", 00:20:53.578 "sha512" 00:20:53.578 ], 00:20:53.578 "dhchap_dhgroups": [ 00:20:53.578 "null", 00:20:53.578 "ffdhe2048", 00:20:53.578 "ffdhe3072", 00:20:53.578 "ffdhe4096", 00:20:53.578 "ffdhe6144", 00:20:53.578 "ffdhe8192" 00:20:53.578 ] 00:20:53.578 } 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "method": "bdev_nvme_attach_controller", 00:20:53.578 "params": { 00:20:53.578 "name": "TLSTEST", 00:20:53.578 "trtype": "TCP", 00:20:53.578 "adrfam": "IPv4", 00:20:53.578 "traddr": "10.0.0.2", 00:20:53.578 "trsvcid": "4420", 00:20:53.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.578 "prchk_reftag": false, 00:20:53.578 "prchk_guard": false, 00:20:53.578 "ctrlr_loss_timeout_sec": 0, 00:20:53.578 "reconnect_delay_sec": 0, 00:20:53.578 "fast_io_fail_timeout_sec": 0, 00:20:53.578 "psk": "key0", 00:20:53.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.578 "hdgst": false, 00:20:53.578 "ddgst": false, 00:20:53.578 "multipath": "multipath" 00:20:53.578 } 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "method": "bdev_nvme_set_hotplug", 00:20:53.578 "params": { 00:20:53.578 "period_us": 100000, 00:20:53.578 "enable": false 00:20:53.578 } 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "method": "bdev_wait_for_examine" 00:20:53.578 } 00:20:53.578 ] 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "subsystem": "nbd", 00:20:53.578 "config": [] 00:20:53.578 } 00:20:53.578 ] 00:20:53.578 }' 00:20:53.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.578 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.578 [2024-12-08 06:24:43.697141] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:20:53.854 [2024-12-08 06:24:43.697235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089771 ] 00:20:53.854 [2024-12-08 06:24:43.763157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.854 [2024-12-08 06:24:43.819603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.112 [2024-12-08 06:24:44.002744] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.112 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.112 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:54.112 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:54.371 Running I/O for 10 seconds... 00:20:56.244 3626.00 IOPS, 14.16 MiB/s [2024-12-08T05:24:47.303Z] 3579.00 IOPS, 13.98 MiB/s [2024-12-08T05:24:48.686Z] 3554.00 IOPS, 13.88 MiB/s [2024-12-08T05:24:49.625Z] 3536.00 IOPS, 13.81 MiB/s [2024-12-08T05:24:50.561Z] 3532.00 IOPS, 13.80 MiB/s [2024-12-08T05:24:51.504Z] 3552.67 IOPS, 13.88 MiB/s [2024-12-08T05:24:52.444Z] 3532.71 IOPS, 13.80 MiB/s [2024-12-08T05:24:53.380Z] 3537.50 IOPS, 13.82 MiB/s [2024-12-08T05:24:54.315Z] 3547.56 IOPS, 13.86 MiB/s [2024-12-08T05:24:54.315Z] 3550.10 IOPS, 13.87 MiB/s 00:21:04.196 Latency(us) 00:21:04.196 [2024-12-08T05:24:54.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.196 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:04.196 Verification LBA range: start 0x0 length 0x2000 00:21:04.196 TLSTESTn1 : 10.02 3554.51 13.88 0.00 0.00 35948.06 9854.67 32234.00 00:21:04.196 [2024-12-08T05:24:54.315Z] =================================================================================================================== 00:21:04.196 [2024-12-08T05:24:54.315Z] Total : 3554.51 13.88 0.00 0.00 35948.06 9854.67 32234.00 00:21:04.196 { 00:21:04.196 "results": [ 00:21:04.196 { 00:21:04.196 "job": "TLSTESTn1", 00:21:04.196 "core_mask": "0x4", 00:21:04.196 "workload": "verify", 00:21:04.196 "status": "finished", 00:21:04.196 "verify_range": { 00:21:04.196 "start": 0, 00:21:04.196 "length": 8192 00:21:04.196 }, 00:21:04.196 "queue_depth": 128, 00:21:04.196 "io_size": 4096, 00:21:04.196 "runtime": 10.023043, 00:21:04.196 "iops": 3554.509344118348, 00:21:04.196 "mibps": 13.884802125462297, 00:21:04.196 "io_failed": 0, 00:21:04.196 "io_timeout": 0, 00:21:04.196 "avg_latency_us": 35948.05777274623, 00:21:04.196 "min_latency_us": 9854.672592592593, 00:21:04.196 "max_latency_us": 32234.002962962964 00:21:04.196 } 00:21:04.196 ], 00:21:04.196 "core_count": 1 00:21:04.196 } 00:21:04.196 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:04.196 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1089771 00:21:04.196 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1089771 ']' 00:21:04.196 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1089771 00:21:04.196 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:21:04.196 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.196 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1089771 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1089771' 00:21:04.502 killing process with pid 1089771 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1089771 00:21:04.502 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.502 00:21:04.502 Latency(us) 00:21:04.502 [2024-12-08T05:24:54.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.502 [2024-12-08T05:24:54.621Z] =================================================================================================================== 00:21:04.502 [2024-12-08T05:24:54.621Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1089771 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1089618 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1089618 ']' 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1089618 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1089618 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1089618' 00:21:04.502 killing process with pid 1089618 00:21:04.502 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1089618 00:21:04.503 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1089618 00:21:04.760 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:04.760 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.760 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.760 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.760 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1091093 00:21:04.760 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:04.760 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1091093 
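waitforlisten, called above for pid 1089618 and here for pid 1091093, blocks until the freshly forked app both stays alive and answers RPC on its UNIX domain socket. The real helper in common/autotest_common.sh carries retry and timeout plumbing; the core idea is roughly this sketch (rpc_get_methods is a standard SPDK RPC; the loop bound and sleep interval are illustrative only):

    pid=1091093 sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || exit 1     # app died while starting up
        [ -S "$sock" ] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done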
00:21:04.760 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1091093 ']' 00:21:04.760 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.760 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.760 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.760 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.760 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.016 [2024-12-08 06:24:54.885317] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:21:05.016 [2024-12-08 06:24:54.885401] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.016 [2024-12-08 06:24:54.956827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.016 [2024-12-08 06:24:55.011667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.016 [2024-12-08 06:24:55.011753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.016 [2024-12-08 06:24:55.011769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.016 [2024-12-08 06:24:55.011781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.016 [2024-12-08 06:24:55.011790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
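Once the reactor is up, setup_nvmf_tgt (target/tls.sh@50-59) rebuilds the TLS-enabled subsystem one RPC at a time instead of loading a config blob; condensed from the xtrace that follows, with every value as used in this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests TLS
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.120wRmBAVN
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0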
00:21:05.016 [2024-12-08 06:24:55.012403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.016 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.016 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:05.016 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:05.016 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:05.016 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.275 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.275 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.120wRmBAVN 00:21:05.275 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.120wRmBAVN 00:21:05.275 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:05.275 [2024-12-08 06:24:55.379194] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.535 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:05.791 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:05.791 [2024-12-08 06:24:55.900641] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.791 [2024-12-08 06:24:55.900950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.049 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:06.307 malloc0 00:21:06.307 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:06.564 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.120wRmBAVN 00:21:06.822 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:07.081 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1091372 00:21:07.081 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:07.081 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:07.081 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1091372 /var/tmp/bdevperf.sock 00:21:07.081 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1091372 ']' 00:21:07.081 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.081 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.081 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.081 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.081 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.081 [2024-12-08 06:24:57.032396] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:21:07.081 [2024-12-08 06:24:57.032483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091372 ] 00:21:07.081 [2024-12-08 06:24:57.101763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.081 [2024-12-08 06:24:57.160432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.338 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.338 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:07.338 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.120wRmBAVN 00:21:07.596 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:07.854 [2024-12-08 06:24:57.771153] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.854 nvme0n1 00:21:07.854 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:08.112 Running I/O for 1 seconds... 
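The initiator side mirrors the target: the same PSK file is registered as key0 in bdevperf's own keyring over its private RPC socket, the controller is attached with --psk, and perform_tests drives the one-second verify workload whose results follow. Condensed from the xtrace above (paths relative to the spdk checkout):

    sock=/var/tmp/bdevperf.sock
    scripts/rpc.py -s $sock keyring_file_add_key key0 /tmp/tmp.120wRmBAVN
    scripts/rpc.py -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests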
00:21:09.052 3442.00 IOPS, 13.45 MiB/s 00:21:09.052 Latency(us) 00:21:09.052 [2024-12-08T05:24:59.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.052 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:09.052 Verification LBA range: start 0x0 length 0x2000 00:21:09.052 nvme0n1 : 1.02 3502.76 13.68 0.00 0.00 36205.24 8009.96 36894.34 00:21:09.052 [2024-12-08T05:24:59.171Z] =================================================================================================================== 00:21:09.052 [2024-12-08T05:24:59.171Z] Total : 3502.76 13.68 0.00 0.00 36205.24 8009.96 36894.34 00:21:09.052 { 00:21:09.052 "results": [ 00:21:09.052 { 00:21:09.052 "job": "nvme0n1", 00:21:09.052 "core_mask": "0x2", 00:21:09.052 "workload": "verify", 00:21:09.052 "status": "finished", 00:21:09.052 "verify_range": { 00:21:09.052 "start": 0, 00:21:09.052 "length": 8192 00:21:09.052 }, 00:21:09.052 "queue_depth": 128, 00:21:09.052 "io_size": 4096, 00:21:09.052 "runtime": 1.019197, 00:21:09.052 "iops": 3502.757563061901, 00:21:09.052 "mibps": 13.68264673071055, 00:21:09.052 "io_failed": 0, 00:21:09.052 "io_timeout": 0, 00:21:09.052 "avg_latency_us": 36205.2438767507, 00:21:09.052 "min_latency_us": 8009.955555555555, 00:21:09.052 "max_latency_us": 36894.34074074074 00:21:09.052 } 00:21:09.052 ], 00:21:09.052 "core_count": 1 00:21:09.052 } 00:21:09.052 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1091372 00:21:09.052 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1091372 ']' 00:21:09.052 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1091372 00:21:09.052 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:09.052 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.052 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1091372 00:21:09.052 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:09.052 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:09.052 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1091372' 00:21:09.052 killing process with pid 1091372 00:21:09.052 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1091372 00:21:09.052 Received shutdown signal, test time was about 1.000000 seconds 00:21:09.052 00:21:09.052 Latency(us) 00:21:09.052 [2024-12-08T05:24:59.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.052 [2024-12-08T05:24:59.171Z] =================================================================================================================== 00:21:09.052 [2024-12-08T05:24:59.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.052 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1091372 00:21:09.311 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1091093 00:21:09.311 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1091093 ']' 00:21:09.311 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1091093 00:21:09.311 06:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:09.311 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.311 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1091093 00:21:09.311 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.311 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.311 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1091093' 00:21:09.311 killing process with pid 1091093 00:21:09.311 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1091093 00:21:09.311 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1091093 00:21:09.569 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:09.569 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:09.569 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.569 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.569 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1091662 00:21:09.569 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:09.569 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1091662 00:21:09.569 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1091662 ']' 00:21:09.569 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.569 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.569 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.569 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.569 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.569 [2024-12-08 06:24:59.597041] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:21:09.569 [2024-12-08 06:24:59.597133] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.569 [2024-12-08 06:24:59.669210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.827 [2024-12-08 06:24:59.722526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.827 [2024-12-08 06:24:59.722592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:09.828 [2024-12-08 06:24:59.722614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.828 [2024-12-08 06:24:59.722626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.828 [2024-12-08 06:24:59.722635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.828 [2024-12-08 06:24:59.723306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.828 [2024-12-08 06:24:59.886415] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.828 malloc0 00:21:09.828 [2024-12-08 06:24:59.916806] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:09.828 [2024-12-08 06:24:59.917085] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1091687 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1091687 /var/tmp/bdevperf.sock 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1091687 ']' 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:09.828 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.086 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.086 [2024-12-08 06:24:59.987627] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
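Beyond re-checking the TLS handshake once more, this last pass exercises persistence: after the run, the live configuration of both processes is captured with save_config (the tgtcfg and bperfcfg JSON dumps below), and the target is then restarted straight from that JSON. The round trip, condensed (rpc_cmd is the suite's wrapper around rpc.py for the default socket; the restart line is an approximation of what nvmfappstart -c /dev/fd/62 expands to in this run):

    tgtcfg=$(rpc_cmd save_config)                              # target state, /var/tmp/spdk.sock
    bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)   # initiator state
    # a target started from the captured config must come back up listening with TLS
    nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")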
00:21:10.086 [2024-12-08 06:24:59.987702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091687 ] 00:21:10.086 [2024-12-08 06:25:00.058920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.086 [2024-12-08 06:25:00.119335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.346 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.346 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:10.346 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.120wRmBAVN 00:21:10.604 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:10.864 [2024-12-08 06:25:00.765551] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:10.864 nvme0n1 00:21:10.864 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:10.864 Running I/O for 1 seconds... 00:21:12.243 3321.00 IOPS, 12.97 MiB/s 00:21:12.243 Latency(us) 00:21:12.243 [2024-12-08T05:25:02.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.243 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:12.243 Verification LBA range: start 0x0 length 0x2000 00:21:12.243 nvme0n1 : 1.03 3364.17 13.14 0.00 0.00 37624.77 8009.96 53593.88 00:21:12.243 [2024-12-08T05:25:02.362Z] =================================================================================================================== 00:21:12.243 [2024-12-08T05:25:02.362Z] Total : 3364.17 13.14 0.00 0.00 37624.77 8009.96 53593.88 00:21:12.243 { 00:21:12.243 "results": [ 00:21:12.243 { 00:21:12.243 "job": "nvme0n1", 00:21:12.243 "core_mask": "0x2", 00:21:12.243 "workload": "verify", 00:21:12.243 "status": "finished", 00:21:12.243 "verify_range": { 00:21:12.243 "start": 0, 00:21:12.243 "length": 8192 00:21:12.243 }, 00:21:12.243 "queue_depth": 128, 00:21:12.243 "io_size": 4096, 00:21:12.243 "runtime": 1.025512, 00:21:12.243 "iops": 3364.1732129901943, 00:21:12.243 "mibps": 13.141301613242947, 00:21:12.243 "io_failed": 0, 00:21:12.243 "io_timeout": 0, 00:21:12.243 "avg_latency_us": 37624.76988942566, 00:21:12.243 "min_latency_us": 8009.955555555555, 00:21:12.243 "max_latency_us": 53593.88444444445 00:21:12.243 } 00:21:12.243 ], 00:21:12.243 "core_count": 1 00:21:12.243 } 00:21:12.243 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:12.243 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.243 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.243 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.243 06:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:12.243 "subsystems": [ 00:21:12.243 { 00:21:12.243 "subsystem": "keyring", 00:21:12.243 "config": [ 00:21:12.243 { 00:21:12.243 "method": "keyring_file_add_key", 00:21:12.243 "params": { 00:21:12.243 "name": "key0", 00:21:12.243 "path": "/tmp/tmp.120wRmBAVN" 00:21:12.243 } 00:21:12.243 } 00:21:12.243 ] 00:21:12.243 }, 00:21:12.243 { 00:21:12.243 "subsystem": "iobuf", 00:21:12.243 "config": [ 00:21:12.243 { 00:21:12.243 "method": "iobuf_set_options", 00:21:12.243 "params": { 00:21:12.243 "small_pool_count": 8192, 00:21:12.243 "large_pool_count": 1024, 00:21:12.243 "small_bufsize": 8192, 00:21:12.243 "large_bufsize": 135168, 00:21:12.243 "enable_numa": false 00:21:12.243 } 00:21:12.243 } 00:21:12.243 ] 00:21:12.243 }, 00:21:12.243 { 00:21:12.243 "subsystem": "sock", 00:21:12.243 "config": [ 00:21:12.243 { 00:21:12.243 "method": "sock_set_default_impl", 00:21:12.243 "params": { 00:21:12.243 "impl_name": "posix" 00:21:12.243 } 00:21:12.243 }, 00:21:12.243 { 00:21:12.243 "method": "sock_impl_set_options", 00:21:12.243 "params": { 00:21:12.243 "impl_name": "ssl", 00:21:12.243 "recv_buf_size": 4096, 00:21:12.243 "send_buf_size": 4096, 00:21:12.243 "enable_recv_pipe": true, 00:21:12.243 "enable_quickack": false, 00:21:12.243 "enable_placement_id": 0, 00:21:12.243 "enable_zerocopy_send_server": true, 00:21:12.243 "enable_zerocopy_send_client": false, 00:21:12.243 "zerocopy_threshold": 0, 00:21:12.243 "tls_version": 0, 00:21:12.243 "enable_ktls": false 00:21:12.243 } 00:21:12.243 }, 00:21:12.243 { 00:21:12.243 "method": "sock_impl_set_options", 00:21:12.243 "params": { 00:21:12.243 "impl_name": "posix", 00:21:12.243 "recv_buf_size": 2097152, 00:21:12.243 "send_buf_size": 2097152, 00:21:12.243 "enable_recv_pipe": true, 00:21:12.243 "enable_quickack": false, 00:21:12.243 "enable_placement_id": 0, 00:21:12.243 "enable_zerocopy_send_server": true, 00:21:12.243 "enable_zerocopy_send_client": false, 00:21:12.243 "zerocopy_threshold": 0, 00:21:12.243 "tls_version": 0, 00:21:12.243 "enable_ktls": false 00:21:12.243 } 00:21:12.243 } 00:21:12.243 ] 00:21:12.243 }, 00:21:12.243 { 00:21:12.243 "subsystem": "vmd", 00:21:12.243 "config": [] 00:21:12.243 }, 00:21:12.243 { 00:21:12.243 "subsystem": "accel", 00:21:12.243 "config": [ 00:21:12.243 { 00:21:12.243 "method": "accel_set_options", 00:21:12.243 "params": { 00:21:12.243 "small_cache_size": 128, 00:21:12.243 "large_cache_size": 16, 00:21:12.243 "task_count": 2048, 00:21:12.243 "sequence_count": 2048, 00:21:12.243 "buf_count": 2048 00:21:12.243 } 00:21:12.243 } 00:21:12.243 ] 00:21:12.243 }, 00:21:12.243 { 00:21:12.243 "subsystem": "bdev", 00:21:12.243 "config": [ 00:21:12.243 { 00:21:12.243 "method": "bdev_set_options", 00:21:12.243 "params": { 00:21:12.243 "bdev_io_pool_size": 65535, 00:21:12.243 "bdev_io_cache_size": 256, 00:21:12.243 "bdev_auto_examine": true, 00:21:12.243 "iobuf_small_cache_size": 128, 00:21:12.243 "iobuf_large_cache_size": 16 00:21:12.243 } 00:21:12.243 }, 00:21:12.243 { 00:21:12.243 "method": "bdev_raid_set_options", 00:21:12.243 "params": { 00:21:12.243 "process_window_size_kb": 1024, 00:21:12.243 "process_max_bandwidth_mb_sec": 0 00:21:12.243 } 00:21:12.243 }, 00:21:12.243 { 00:21:12.243 "method": "bdev_iscsi_set_options", 00:21:12.243 "params": { 00:21:12.243 "timeout_sec": 30 00:21:12.243 } 00:21:12.243 }, 00:21:12.243 { 00:21:12.243 "method": "bdev_nvme_set_options", 00:21:12.243 "params": { 00:21:12.243 "action_on_timeout": "none", 00:21:12.243 
"timeout_us": 0, 00:21:12.243 "timeout_admin_us": 0, 00:21:12.243 "keep_alive_timeout_ms": 10000, 00:21:12.243 "arbitration_burst": 0, 00:21:12.243 "low_priority_weight": 0, 00:21:12.243 "medium_priority_weight": 0, 00:21:12.243 "high_priority_weight": 0, 00:21:12.243 "nvme_adminq_poll_period_us": 10000, 00:21:12.243 "nvme_ioq_poll_period_us": 0, 00:21:12.243 "io_queue_requests": 0, 00:21:12.243 "delay_cmd_submit": true, 00:21:12.243 "transport_retry_count": 4, 00:21:12.243 "bdev_retry_count": 3, 00:21:12.243 "transport_ack_timeout": 0, 00:21:12.243 "ctrlr_loss_timeout_sec": 0, 00:21:12.243 "reconnect_delay_sec": 0, 00:21:12.243 "fast_io_fail_timeout_sec": 0, 00:21:12.243 "disable_auto_failback": false, 00:21:12.243 "generate_uuids": false, 00:21:12.243 "transport_tos": 0, 00:21:12.243 "nvme_error_stat": false, 00:21:12.243 "rdma_srq_size": 0, 00:21:12.243 "io_path_stat": false, 00:21:12.243 "allow_accel_sequence": false, 00:21:12.243 "rdma_max_cq_size": 0, 00:21:12.243 "rdma_cm_event_timeout_ms": 0, 00:21:12.243 "dhchap_digests": [ 00:21:12.243 "sha256", 00:21:12.244 "sha384", 00:21:12.244 "sha512" 00:21:12.244 ], 00:21:12.244 "dhchap_dhgroups": [ 00:21:12.244 "null", 00:21:12.244 "ffdhe2048", 00:21:12.244 "ffdhe3072", 00:21:12.244 "ffdhe4096", 00:21:12.244 "ffdhe6144", 00:21:12.244 "ffdhe8192" 00:21:12.244 ] 00:21:12.244 } 00:21:12.244 }, 00:21:12.244 { 00:21:12.244 "method": "bdev_nvme_set_hotplug", 00:21:12.244 "params": { 00:21:12.244 "period_us": 100000, 00:21:12.244 "enable": false 00:21:12.244 } 00:21:12.244 }, 00:21:12.244 { 00:21:12.244 "method": "bdev_malloc_create", 00:21:12.244 "params": { 00:21:12.244 "name": "malloc0", 00:21:12.244 "num_blocks": 8192, 00:21:12.244 "block_size": 4096, 00:21:12.244 "physical_block_size": 4096, 00:21:12.244 "uuid": "621d48b1-766a-49a2-830e-ed262f3e0619", 00:21:12.244 "optimal_io_boundary": 0, 00:21:12.244 "md_size": 0, 00:21:12.244 "dif_type": 0, 00:21:12.244 "dif_is_head_of_md": false, 00:21:12.244 "dif_pi_format": 0 00:21:12.244 } 00:21:12.244 }, 00:21:12.244 { 00:21:12.244 "method": "bdev_wait_for_examine" 00:21:12.244 } 00:21:12.244 ] 00:21:12.244 }, 00:21:12.244 { 00:21:12.244 "subsystem": "nbd", 00:21:12.244 "config": [] 00:21:12.244 }, 00:21:12.244 { 00:21:12.244 "subsystem": "scheduler", 00:21:12.244 "config": [ 00:21:12.244 { 00:21:12.244 "method": "framework_set_scheduler", 00:21:12.244 "params": { 00:21:12.244 "name": "static" 00:21:12.244 } 00:21:12.244 } 00:21:12.244 ] 00:21:12.244 }, 00:21:12.244 { 00:21:12.244 "subsystem": "nvmf", 00:21:12.244 "config": [ 00:21:12.244 { 00:21:12.244 "method": "nvmf_set_config", 00:21:12.244 "params": { 00:21:12.244 "discovery_filter": "match_any", 00:21:12.244 "admin_cmd_passthru": { 00:21:12.244 "identify_ctrlr": false 00:21:12.244 }, 00:21:12.244 "dhchap_digests": [ 00:21:12.244 "sha256", 00:21:12.244 "sha384", 00:21:12.244 "sha512" 00:21:12.244 ], 00:21:12.244 "dhchap_dhgroups": [ 00:21:12.244 "null", 00:21:12.244 "ffdhe2048", 00:21:12.244 "ffdhe3072", 00:21:12.244 "ffdhe4096", 00:21:12.244 "ffdhe6144", 00:21:12.244 "ffdhe8192" 00:21:12.244 ] 00:21:12.244 } 00:21:12.244 }, 00:21:12.244 { 00:21:12.244 "method": "nvmf_set_max_subsystems", 00:21:12.244 "params": { 00:21:12.244 "max_subsystems": 1024 00:21:12.244 } 00:21:12.244 }, 00:21:12.244 { 00:21:12.244 "method": "nvmf_set_crdt", 00:21:12.244 "params": { 00:21:12.244 "crdt1": 0, 00:21:12.244 "crdt2": 0, 00:21:12.244 "crdt3": 0 00:21:12.244 } 00:21:12.244 }, 00:21:12.244 { 00:21:12.244 "method": "nvmf_create_transport", 00:21:12.244 "params": 
{ 00:21:12.244 "trtype": "TCP", 00:21:12.244 "max_queue_depth": 128, 00:21:12.244 "max_io_qpairs_per_ctrlr": 127, 00:21:12.244 "in_capsule_data_size": 4096, 00:21:12.244 "max_io_size": 131072, 00:21:12.244 "io_unit_size": 131072, 00:21:12.244 "max_aq_depth": 128, 00:21:12.244 "num_shared_buffers": 511, 00:21:12.244 "buf_cache_size": 4294967295, 00:21:12.244 "dif_insert_or_strip": false, 00:21:12.244 "zcopy": false, 00:21:12.244 "c2h_success": false, 00:21:12.244 "sock_priority": 0, 00:21:12.244 "abort_timeout_sec": 1, 00:21:12.244 "ack_timeout": 0, 00:21:12.244 "data_wr_pool_size": 0 00:21:12.244 } 00:21:12.244 }, 00:21:12.244 { 00:21:12.244 "method": "nvmf_create_subsystem", 00:21:12.244 "params": { 00:21:12.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.244 "allow_any_host": false, 00:21:12.244 "serial_number": "00000000000000000000", 00:21:12.244 "model_number": "SPDK bdev Controller", 00:21:12.244 "max_namespaces": 32, 00:21:12.244 "min_cntlid": 1, 00:21:12.244 "max_cntlid": 65519, 00:21:12.244 "ana_reporting": false 00:21:12.244 } 00:21:12.244 }, 00:21:12.244 { 00:21:12.244 "method": "nvmf_subsystem_add_host", 00:21:12.244 "params": { 00:21:12.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.244 "host": "nqn.2016-06.io.spdk:host1", 00:21:12.244 "psk": "key0" 00:21:12.244 } 00:21:12.244 }, 00:21:12.244 { 00:21:12.244 "method": "nvmf_subsystem_add_ns", 00:21:12.244 "params": { 00:21:12.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.244 "namespace": { 00:21:12.244 "nsid": 1, 00:21:12.244 "bdev_name": "malloc0", 00:21:12.244 "nguid": "621D48B1766A49A2830EED262F3E0619", 00:21:12.244 "uuid": "621d48b1-766a-49a2-830e-ed262f3e0619", 00:21:12.244 "no_auto_visible": false 00:21:12.244 } 00:21:12.244 } 00:21:12.244 }, 00:21:12.244 { 00:21:12.244 "method": "nvmf_subsystem_add_listener", 00:21:12.244 "params": { 00:21:12.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.244 "listen_address": { 00:21:12.244 "trtype": "TCP", 00:21:12.244 "adrfam": "IPv4", 00:21:12.244 "traddr": "10.0.0.2", 00:21:12.244 "trsvcid": "4420" 00:21:12.244 }, 00:21:12.244 "secure_channel": false, 00:21:12.244 "sock_impl": "ssl" 00:21:12.244 } 00:21:12.244 } 00:21:12.244 ] 00:21:12.244 } 00:21:12.244 ] 00:21:12.244 }' 00:21:12.244 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:12.502 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:12.502 "subsystems": [ 00:21:12.502 { 00:21:12.502 "subsystem": "keyring", 00:21:12.502 "config": [ 00:21:12.502 { 00:21:12.502 "method": "keyring_file_add_key", 00:21:12.502 "params": { 00:21:12.502 "name": "key0", 00:21:12.502 "path": "/tmp/tmp.120wRmBAVN" 00:21:12.502 } 00:21:12.502 } 00:21:12.502 ] 00:21:12.502 }, 00:21:12.502 { 00:21:12.502 "subsystem": "iobuf", 00:21:12.502 "config": [ 00:21:12.502 { 00:21:12.502 "method": "iobuf_set_options", 00:21:12.502 "params": { 00:21:12.502 "small_pool_count": 8192, 00:21:12.502 "large_pool_count": 1024, 00:21:12.502 "small_bufsize": 8192, 00:21:12.502 "large_bufsize": 135168, 00:21:12.502 "enable_numa": false 00:21:12.502 } 00:21:12.502 } 00:21:12.502 ] 00:21:12.502 }, 00:21:12.502 { 00:21:12.502 "subsystem": "sock", 00:21:12.502 "config": [ 00:21:12.502 { 00:21:12.502 "method": "sock_set_default_impl", 00:21:12.502 "params": { 00:21:12.502 "impl_name": "posix" 00:21:12.502 } 00:21:12.502 }, 00:21:12.502 { 00:21:12.502 "method": "sock_impl_set_options", 00:21:12.502 
"params": { 00:21:12.502 "impl_name": "ssl", 00:21:12.502 "recv_buf_size": 4096, 00:21:12.502 "send_buf_size": 4096, 00:21:12.502 "enable_recv_pipe": true, 00:21:12.502 "enable_quickack": false, 00:21:12.502 "enable_placement_id": 0, 00:21:12.502 "enable_zerocopy_send_server": true, 00:21:12.502 "enable_zerocopy_send_client": false, 00:21:12.502 "zerocopy_threshold": 0, 00:21:12.502 "tls_version": 0, 00:21:12.502 "enable_ktls": false 00:21:12.502 } 00:21:12.503 }, 00:21:12.503 { 00:21:12.503 "method": "sock_impl_set_options", 00:21:12.503 "params": { 00:21:12.503 "impl_name": "posix", 00:21:12.503 "recv_buf_size": 2097152, 00:21:12.503 "send_buf_size": 2097152, 00:21:12.503 "enable_recv_pipe": true, 00:21:12.503 "enable_quickack": false, 00:21:12.503 "enable_placement_id": 0, 00:21:12.503 "enable_zerocopy_send_server": true, 00:21:12.503 "enable_zerocopy_send_client": false, 00:21:12.503 "zerocopy_threshold": 0, 00:21:12.503 "tls_version": 0, 00:21:12.503 "enable_ktls": false 00:21:12.503 } 00:21:12.503 } 00:21:12.503 ] 00:21:12.503 }, 00:21:12.503 { 00:21:12.503 "subsystem": "vmd", 00:21:12.503 "config": [] 00:21:12.503 }, 00:21:12.503 { 00:21:12.503 "subsystem": "accel", 00:21:12.503 "config": [ 00:21:12.503 { 00:21:12.503 "method": "accel_set_options", 00:21:12.503 "params": { 00:21:12.503 "small_cache_size": 128, 00:21:12.503 "large_cache_size": 16, 00:21:12.503 "task_count": 2048, 00:21:12.503 "sequence_count": 2048, 00:21:12.503 "buf_count": 2048 00:21:12.503 } 00:21:12.503 } 00:21:12.503 ] 00:21:12.503 }, 00:21:12.503 { 00:21:12.503 "subsystem": "bdev", 00:21:12.503 "config": [ 00:21:12.503 { 00:21:12.503 "method": "bdev_set_options", 00:21:12.503 "params": { 00:21:12.503 "bdev_io_pool_size": 65535, 00:21:12.503 "bdev_io_cache_size": 256, 00:21:12.503 "bdev_auto_examine": true, 00:21:12.503 "iobuf_small_cache_size": 128, 00:21:12.503 "iobuf_large_cache_size": 16 00:21:12.503 } 00:21:12.503 }, 00:21:12.503 { 00:21:12.503 "method": "bdev_raid_set_options", 00:21:12.503 "params": { 00:21:12.503 "process_window_size_kb": 1024, 00:21:12.503 "process_max_bandwidth_mb_sec": 0 00:21:12.503 } 00:21:12.503 }, 00:21:12.503 { 00:21:12.503 "method": "bdev_iscsi_set_options", 00:21:12.503 "params": { 00:21:12.503 "timeout_sec": 30 00:21:12.503 } 00:21:12.503 }, 00:21:12.503 { 00:21:12.503 "method": "bdev_nvme_set_options", 00:21:12.503 "params": { 00:21:12.503 "action_on_timeout": "none", 00:21:12.503 "timeout_us": 0, 00:21:12.503 "timeout_admin_us": 0, 00:21:12.503 "keep_alive_timeout_ms": 10000, 00:21:12.503 "arbitration_burst": 0, 00:21:12.503 "low_priority_weight": 0, 00:21:12.503 "medium_priority_weight": 0, 00:21:12.503 "high_priority_weight": 0, 00:21:12.503 "nvme_adminq_poll_period_us": 10000, 00:21:12.503 "nvme_ioq_poll_period_us": 0, 00:21:12.503 "io_queue_requests": 512, 00:21:12.503 "delay_cmd_submit": true, 00:21:12.503 "transport_retry_count": 4, 00:21:12.503 "bdev_retry_count": 3, 00:21:12.503 "transport_ack_timeout": 0, 00:21:12.503 "ctrlr_loss_timeout_sec": 0, 00:21:12.503 "reconnect_delay_sec": 0, 00:21:12.503 "fast_io_fail_timeout_sec": 0, 00:21:12.503 "disable_auto_failback": false, 00:21:12.503 "generate_uuids": false, 00:21:12.503 "transport_tos": 0, 00:21:12.503 "nvme_error_stat": false, 00:21:12.503 "rdma_srq_size": 0, 00:21:12.503 "io_path_stat": false, 00:21:12.503 "allow_accel_sequence": false, 00:21:12.503 "rdma_max_cq_size": 0, 00:21:12.503 "rdma_cm_event_timeout_ms": 0, 00:21:12.503 "dhchap_digests": [ 00:21:12.503 "sha256", 00:21:12.503 "sha384", 00:21:12.503 
"sha512" 00:21:12.503 ], 00:21:12.503 "dhchap_dhgroups": [ 00:21:12.503 "null", 00:21:12.503 "ffdhe2048", 00:21:12.503 "ffdhe3072", 00:21:12.503 "ffdhe4096", 00:21:12.503 "ffdhe6144", 00:21:12.503 "ffdhe8192" 00:21:12.503 ] 00:21:12.503 } 00:21:12.503 }, 00:21:12.503 { 00:21:12.503 "method": "bdev_nvme_attach_controller", 00:21:12.503 "params": { 00:21:12.503 "name": "nvme0", 00:21:12.503 "trtype": "TCP", 00:21:12.503 "adrfam": "IPv4", 00:21:12.503 "traddr": "10.0.0.2", 00:21:12.503 "trsvcid": "4420", 00:21:12.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.503 "prchk_reftag": false, 00:21:12.503 "prchk_guard": false, 00:21:12.503 "ctrlr_loss_timeout_sec": 0, 00:21:12.503 "reconnect_delay_sec": 0, 00:21:12.503 "fast_io_fail_timeout_sec": 0, 00:21:12.503 "psk": "key0", 00:21:12.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:12.503 "hdgst": false, 00:21:12.503 "ddgst": false, 00:21:12.503 "multipath": "multipath" 00:21:12.503 } 00:21:12.503 }, 00:21:12.503 { 00:21:12.503 "method": "bdev_nvme_set_hotplug", 00:21:12.503 "params": { 00:21:12.503 "period_us": 100000, 00:21:12.503 "enable": false 00:21:12.503 } 00:21:12.503 }, 00:21:12.503 { 00:21:12.503 "method": "bdev_enable_histogram", 00:21:12.503 "params": { 00:21:12.503 "name": "nvme0n1", 00:21:12.503 "enable": true 00:21:12.503 } 00:21:12.503 }, 00:21:12.503 { 00:21:12.503 "method": "bdev_wait_for_examine" 00:21:12.503 } 00:21:12.503 ] 00:21:12.503 }, 00:21:12.503 { 00:21:12.503 "subsystem": "nbd", 00:21:12.503 "config": [] 00:21:12.503 } 00:21:12.503 ] 00:21:12.503 }' 00:21:12.503 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1091687 00:21:12.503 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1091687 ']' 00:21:12.503 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1091687 00:21:12.503 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.503 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.503 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1091687 00:21:12.503 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:12.503 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:12.503 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1091687' 00:21:12.503 killing process with pid 1091687 00:21:12.503 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1091687 00:21:12.503 Received shutdown signal, test time was about 1.000000 seconds 00:21:12.503 00:21:12.503 Latency(us) 00:21:12.503 [2024-12-08T05:25:02.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.503 [2024-12-08T05:25:02.622Z] =================================================================================================================== 00:21:12.503 [2024-12-08T05:25:02.622Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.503 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1091687 00:21:12.761 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1091662 00:21:12.761 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1091662 
']' 00:21:12.761 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1091662 00:21:12.761 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.761 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.761 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1091662 00:21:12.761 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:12.761 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:12.762 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1091662' 00:21:12.762 killing process with pid 1091662 00:21:12.762 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1091662 00:21:12.762 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1091662 00:21:13.020 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:13.020 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:13.020 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:13.020 "subsystems": [ 00:21:13.020 { 00:21:13.020 "subsystem": "keyring", 00:21:13.020 "config": [ 00:21:13.020 { 00:21:13.020 "method": "keyring_file_add_key", 00:21:13.020 "params": { 00:21:13.020 "name": "key0", 00:21:13.020 "path": "/tmp/tmp.120wRmBAVN" 00:21:13.020 } 00:21:13.020 } 00:21:13.020 ] 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "subsystem": "iobuf", 00:21:13.020 "config": [ 00:21:13.020 { 00:21:13.020 "method": "iobuf_set_options", 00:21:13.020 "params": { 00:21:13.020 "small_pool_count": 8192, 00:21:13.020 "large_pool_count": 1024, 00:21:13.020 "small_bufsize": 8192, 00:21:13.020 "large_bufsize": 135168, 00:21:13.020 "enable_numa": false 00:21:13.020 } 00:21:13.020 } 00:21:13.020 ] 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "subsystem": "sock", 00:21:13.020 "config": [ 00:21:13.020 { 00:21:13.020 "method": "sock_set_default_impl", 00:21:13.020 "params": { 00:21:13.020 "impl_name": "posix" 00:21:13.020 } 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "method": "sock_impl_set_options", 00:21:13.020 "params": { 00:21:13.020 "impl_name": "ssl", 00:21:13.020 "recv_buf_size": 4096, 00:21:13.020 "send_buf_size": 4096, 00:21:13.020 "enable_recv_pipe": true, 00:21:13.020 "enable_quickack": false, 00:21:13.020 "enable_placement_id": 0, 00:21:13.020 "enable_zerocopy_send_server": true, 00:21:13.020 "enable_zerocopy_send_client": false, 00:21:13.020 "zerocopy_threshold": 0, 00:21:13.020 "tls_version": 0, 00:21:13.020 "enable_ktls": false 00:21:13.020 } 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "method": "sock_impl_set_options", 00:21:13.020 "params": { 00:21:13.020 "impl_name": "posix", 00:21:13.020 "recv_buf_size": 2097152, 00:21:13.020 "send_buf_size": 2097152, 00:21:13.020 "enable_recv_pipe": true, 00:21:13.020 "enable_quickack": false, 00:21:13.020 "enable_placement_id": 0, 00:21:13.020 "enable_zerocopy_send_server": true, 00:21:13.020 "enable_zerocopy_send_client": false, 00:21:13.020 "zerocopy_threshold": 0, 00:21:13.020 "tls_version": 0, 00:21:13.020 "enable_ktls": false 00:21:13.020 } 00:21:13.020 } 00:21:13.020 ] 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "subsystem": 
"vmd", 00:21:13.020 "config": [] 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "subsystem": "accel", 00:21:13.020 "config": [ 00:21:13.020 { 00:21:13.020 "method": "accel_set_options", 00:21:13.020 "params": { 00:21:13.020 "small_cache_size": 128, 00:21:13.020 "large_cache_size": 16, 00:21:13.020 "task_count": 2048, 00:21:13.020 "sequence_count": 2048, 00:21:13.020 "buf_count": 2048 00:21:13.020 } 00:21:13.020 } 00:21:13.020 ] 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "subsystem": "bdev", 00:21:13.020 "config": [ 00:21:13.020 { 00:21:13.020 "method": "bdev_set_options", 00:21:13.020 "params": { 00:21:13.020 "bdev_io_pool_size": 65535, 00:21:13.020 "bdev_io_cache_size": 256, 00:21:13.020 "bdev_auto_examine": true, 00:21:13.020 "iobuf_small_cache_size": 128, 00:21:13.020 "iobuf_large_cache_size": 16 00:21:13.020 } 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "method": "bdev_raid_set_options", 00:21:13.020 "params": { 00:21:13.020 "process_window_size_kb": 1024, 00:21:13.020 "process_max_bandwidth_mb_sec": 0 00:21:13.020 } 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "method": "bdev_iscsi_set_options", 00:21:13.020 "params": { 00:21:13.020 "timeout_sec": 30 00:21:13.020 } 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "method": "bdev_nvme_set_options", 00:21:13.020 "params": { 00:21:13.020 "action_on_timeout": "none", 00:21:13.020 "timeout_us": 0, 00:21:13.020 "timeout_admin_us": 0, 00:21:13.020 "keep_alive_timeout_ms": 10000, 00:21:13.020 "arbitration_burst": 0, 00:21:13.020 "low_priority_weight": 0, 00:21:13.020 "medium_priority_weight": 0, 00:21:13.020 "high_priority_weight": 0, 00:21:13.020 "nvme_adminq_poll_period_us": 10000, 00:21:13.020 "nvme_ioq_poll_period_us": 0, 00:21:13.020 "io_queue_requests": 0, 00:21:13.020 "delay_cmd_submit": true, 00:21:13.020 "transport_retry_count": 4, 00:21:13.020 "bdev_retry_count": 3, 00:21:13.020 "transport_ack_timeout": 0, 00:21:13.020 "ctrlr_loss_timeout_sec": 0, 00:21:13.020 "reconnect_delay_sec": 0, 00:21:13.020 "fast_io_fail_timeout_sec": 0, 00:21:13.020 "disable_auto_failback": false, 00:21:13.020 "generate_uuids": false, 00:21:13.020 "transport_tos": 0, 00:21:13.020 "nvme_error_stat": false, 00:21:13.020 "rdma_srq_size": 0, 00:21:13.020 "io_path_stat": false, 00:21:13.020 "allow_accel_sequence": false, 00:21:13.020 "rdma_max_cq_size": 0, 00:21:13.020 "rdma_cm_event_timeout_ms": 0, 00:21:13.020 "dhchap_digests": [ 00:21:13.020 "sha256", 00:21:13.020 "sha384", 00:21:13.020 "sha512" 00:21:13.020 ], 00:21:13.020 "dhchap_dhgroups": [ 00:21:13.020 "null", 00:21:13.020 "ffdhe2048", 00:21:13.020 "ffdhe3072", 00:21:13.020 "ffdhe4096", 00:21:13.020 "ffdhe6144", 00:21:13.020 "ffdhe8192" 00:21:13.020 ] 00:21:13.020 } 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "method": "bdev_nvme_set_hotplug", 00:21:13.020 "params": { 00:21:13.020 "period_us": 100000, 00:21:13.020 "enable": false 00:21:13.020 } 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "method": "bdev_malloc_create", 00:21:13.020 "params": { 00:21:13.020 "name": "malloc0", 00:21:13.020 "num_blocks": 8192, 00:21:13.020 "block_size": 4096, 00:21:13.020 "physical_block_size": 4096, 00:21:13.020 "uuid": "621d48b1-766a-49a2-830e-ed262f3e0619", 00:21:13.020 "optimal_io_boundary": 0, 00:21:13.020 "md_size": 0, 00:21:13.020 "dif_type": 0, 00:21:13.020 "dif_is_head_of_md": false, 00:21:13.020 "dif_pi_format": 0 00:21:13.020 } 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "method": "bdev_wait_for_examine" 00:21:13.020 } 00:21:13.020 ] 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "subsystem": "nbd", 00:21:13.020 "config": 
[] 00:21:13.020 }, 00:21:13.020 { 00:21:13.020 "subsystem": "scheduler", 00:21:13.020 "config": [ 00:21:13.020 { 00:21:13.020 "method": "framework_set_scheduler", 00:21:13.020 "params": { 00:21:13.020 "name": "static" 00:21:13.020 } 00:21:13.020 } 00:21:13.020 ] 00:21:13.021 }, 00:21:13.021 { 00:21:13.021 "subsystem": "nvmf", 00:21:13.021 "config": [ 00:21:13.021 { 00:21:13.021 "method": "nvmf_set_config", 00:21:13.021 "params": { 00:21:13.021 "discovery_filter": "match_any", 00:21:13.021 "admin_cmd_passthru": { 00:21:13.021 "identify_ctrlr": false 00:21:13.021 }, 00:21:13.021 "dhchap_digests": [ 00:21:13.021 "sha256", 00:21:13.021 "sha384", 00:21:13.021 "sha512" 00:21:13.021 ], 00:21:13.021 "dhchap_dhgroups": [ 00:21:13.021 "null", 00:21:13.021 "ffdhe2048", 00:21:13.021 "ffdhe3072", 00:21:13.021 "ffdhe4096", 00:21:13.021 "ffdhe6144", 00:21:13.021 "ffdhe8192" 00:21:13.021 ] 00:21:13.021 } 00:21:13.021 }, 00:21:13.021 { 00:21:13.021 "method": "nvmf_set_max_subsystems", 00:21:13.021 "params": { 00:21:13.021 "max_subsystems": 1024 00:21:13.021 } 00:21:13.021 }, 00:21:13.021 { 00:21:13.021 "method": "nvmf_set_crdt", 00:21:13.021 "params": { 00:21:13.021 "crdt1": 0, 00:21:13.021 "crdt2": 0, 00:21:13.021 "crdt3": 0 00:21:13.021 } 00:21:13.021 }, 00:21:13.021 { 00:21:13.021 "method": "nvmf_create_transport", 00:21:13.021 "params": { 00:21:13.021 "trtype": "TCP", 00:21:13.021 "max_queue_depth": 128, 00:21:13.021 "max_io_qpairs_per_ctrlr": 127, 00:21:13.021 "in_capsule_data_size": 4096, 00:21:13.021 "max_io_size": 131072, 00:21:13.021 "io_unit_size": 131072, 00:21:13.021 "max_aq_depth": 128, 00:21:13.021 "num_shared_buffers": 511, 00:21:13.021 "buf_cache_size": 4294967295, 00:21:13.021 "dif_insert_or_strip": false, 00:21:13.021 "zcopy": false, 00:21:13.021 "c2h_success": false, 00:21:13.021 "sock_priority": 0, 00:21:13.021 "abort_timeout_sec": 1, 00:21:13.021 "ack_timeout": 0, 00:21:13.021 "data_wr_pool_size": 0 00:21:13.021 } 00:21:13.021 }, 00:21:13.021 { 00:21:13.021 "method": "nvmf_create_subsystem", 00:21:13.021 "params": { 00:21:13.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.021 "allow_any_host": false, 00:21:13.021 "serial_number": "00000000000000000000", 00:21:13.021 "model_number": "SPDK bdev Controller", 00:21:13.021 "max_namespaces": 32, 00:21:13.021 "min_cntlid": 1, 00:21:13.021 "max_cntlid": 65519, 00:21:13.021 "ana_reporting": false 00:21:13.021 } 00:21:13.021 }, 00:21:13.021 { 00:21:13.021 "method": "nvmf_subsystem_add_host", 00:21:13.021 "params": { 00:21:13.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.021 "host": "nqn.2016-06.io.spdk:host1", 00:21:13.021 "psk": "key0" 00:21:13.021 } 00:21:13.021 }, 00:21:13.021 { 00:21:13.021 "method": "nvmf_subsystem_add_ns", 00:21:13.021 "params": { 00:21:13.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.021 "namespace": { 00:21:13.021 "nsid": 1, 00:21:13.021 "bdev_name": "malloc0", 00:21:13.021 "nguid": "621D48B1766A49A2830EED262F3E0619", 00:21:13.021 "uuid": "621d48b1-766a-49a2-830e-ed262f3e0619", 00:21:13.021 "no_auto_visible": false 00:21:13.021 } 00:21:13.021 } 00:21:13.021 }, 00:21:13.021 { 00:21:13.021 "method": "nvmf_subsystem_add_listener", 00:21:13.021 "params": { 00:21:13.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.021 "listen_address": { 00:21:13.021 "trtype": "TCP", 00:21:13.021 "adrfam": "IPv4", 00:21:13.021 "traddr": "10.0.0.2", 00:21:13.021 "trsvcid": "4420" 00:21:13.021 }, 00:21:13.021 "secure_channel": false, 00:21:13.021 "sock_impl": "ssl" 00:21:13.021 } 00:21:13.021 } 00:21:13.021 ] 00:21:13.021 } 
00:21:13.021 ] 00:21:13.021 }' 00:21:13.021 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.021 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.021 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1092097 00:21:13.021 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:13.021 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1092097 00:21:13.021 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1092097 ']' 00:21:13.021 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.021 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.021 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.021 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.021 06:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.021 [2024-12-08 06:25:03.081994] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:21:13.021 [2024-12-08 06:25:03.082078] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.280 [2024-12-08 06:25:03.152870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.280 [2024-12-08 06:25:03.209190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.280 [2024-12-08 06:25:03.209265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.280 [2024-12-08 06:25:03.209279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.280 [2024-12-08 06:25:03.209291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.280 [2024-12-08 06:25:03.209300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
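For reference, the config blob fed to nvmf_tgt via -c /dev/fd/62 above is a full subsystem dump saved from the earlier target process; the TLS-relevant portion reduces to roughly the sketch below. This is a trimmed illustration only, with values copied from the dump above; the bdev_malloc_create and nvmf_subsystem_add_ns entries that back the namespace are omitted for brevity, and this cut-down form has not been run as-is:

  {
    "subsystems": [
      { "subsystem": "keyring",
        "config": [
          { "method": "keyring_file_add_key",
            "params": { "name": "key0", "path": "/tmp/tmp.120wRmBAVN" } }
        ] },
      { "subsystem": "nvmf",
        "config": [
          { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
          { "method": "nvmf_create_subsystem",
            "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
          { "method": "nvmf_subsystem_add_host",
            "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                        "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
          { "method": "nvmf_subsystem_add_listener",
            "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                        "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                            "traddr": "10.0.0.2", "trsvcid": "4420" },
                        "secure_channel": false, "sock_impl": "ssl" } }
        ] }
    ]
  }

The PSK file path is the temp file created earlier in the test; "psk": "key0" refers back to the keyring entry, not to the file directly.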
00:21:13.280 [2024-12-08 06:25:03.209964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.581 [2024-12-08 06:25:03.459278] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.581 [2024-12-08 06:25:03.491307] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.581 [2024-12-08 06:25:03.491598] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1092250 00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1092250 /var/tmp/bdevperf.sock 00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1092250 ']' 00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
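The bdevperf invocation above (-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63) starts the initiator paused: -z makes bdevperf idle until a perform_tests RPC arrives on the -r socket, which is what lets the harness inspect the TLS-attached controller before any I/O runs. A hand-driven replay would look roughly like the sketch below; paths are relative to an spdk checkout, and bdevperf.json is a stand-in for the /dev/fd/63 blob echoed in the trace:

  # start bdevperf paused (-z) on core 1 (-m 2), RPC socket at -r
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c bdevperf.json &
  # once the socket is up, confirm the TLS-attached controller exists
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # with -z, bdevperf only starts the workload when told to
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

This mirrors the bdev_nvme_get_controllers / perform_tests sequence the harness itself runs further down in this log.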
00:21:14.172 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:14.172 "subsystems": [ 00:21:14.172 { 00:21:14.172 "subsystem": "keyring", 00:21:14.172 "config": [ 00:21:14.172 { 00:21:14.172 "method": "keyring_file_add_key", 00:21:14.172 "params": { 00:21:14.172 "name": "key0", 00:21:14.172 "path": "/tmp/tmp.120wRmBAVN" 00:21:14.172 } 00:21:14.172 } 00:21:14.172 ] 00:21:14.172 }, 00:21:14.172 { 00:21:14.172 "subsystem": "iobuf", 00:21:14.172 "config": [ 00:21:14.172 { 00:21:14.172 "method": "iobuf_set_options", 00:21:14.172 "params": { 00:21:14.172 "small_pool_count": 8192, 00:21:14.172 "large_pool_count": 1024, 00:21:14.172 "small_bufsize": 8192, 00:21:14.172 "large_bufsize": 135168, 00:21:14.172 "enable_numa": false 00:21:14.172 } 00:21:14.172 } 00:21:14.172 ] 00:21:14.172 }, 00:21:14.172 { 00:21:14.172 "subsystem": "sock", 00:21:14.172 "config": [ 00:21:14.172 { 00:21:14.172 "method": "sock_set_default_impl", 00:21:14.172 "params": { 00:21:14.172 "impl_name": "posix" 00:21:14.172 } 00:21:14.172 }, 00:21:14.172 { 00:21:14.172 "method": "sock_impl_set_options", 00:21:14.172 "params": { 00:21:14.172 "impl_name": "ssl", 00:21:14.172 "recv_buf_size": 4096, 00:21:14.172 "send_buf_size": 4096, 00:21:14.172 "enable_recv_pipe": true, 00:21:14.172 "enable_quickack": false, 00:21:14.172 "enable_placement_id": 0, 00:21:14.172 "enable_zerocopy_send_server": true, 00:21:14.172 "enable_zerocopy_send_client": false, 00:21:14.172 "zerocopy_threshold": 0, 00:21:14.172 "tls_version": 0, 00:21:14.172 "enable_ktls": false 00:21:14.172 } 00:21:14.172 }, 00:21:14.172 { 00:21:14.172 "method": "sock_impl_set_options", 00:21:14.172 "params": { 00:21:14.172 "impl_name": "posix", 00:21:14.172 "recv_buf_size": 2097152, 00:21:14.172 "send_buf_size": 2097152, 00:21:14.172 "enable_recv_pipe": true, 00:21:14.172 "enable_quickack": false, 00:21:14.172 "enable_placement_id": 0, 00:21:14.172 "enable_zerocopy_send_server": true, 00:21:14.172 "enable_zerocopy_send_client": false, 00:21:14.172 "zerocopy_threshold": 0, 00:21:14.172 "tls_version": 0, 00:21:14.172 "enable_ktls": false 00:21:14.172 } 00:21:14.172 } 00:21:14.172 ] 00:21:14.172 }, 00:21:14.172 { 00:21:14.172 "subsystem": "vmd", 00:21:14.172 "config": [] 00:21:14.172 }, 00:21:14.172 { 00:21:14.172 "subsystem": "accel", 00:21:14.172 "config": [ 00:21:14.172 { 00:21:14.172 "method": "accel_set_options", 00:21:14.172 "params": { 00:21:14.172 "small_cache_size": 128, 00:21:14.172 "large_cache_size": 16, 00:21:14.172 "task_count": 2048, 00:21:14.172 "sequence_count": 2048, 00:21:14.172 "buf_count": 2048 00:21:14.172 } 00:21:14.172 } 00:21:14.172 ] 00:21:14.172 }, 00:21:14.172 { 00:21:14.172 "subsystem": "bdev", 00:21:14.172 "config": [ 00:21:14.172 { 00:21:14.172 "method": "bdev_set_options", 00:21:14.172 "params": { 00:21:14.172 "bdev_io_pool_size": 65535, 00:21:14.172 "bdev_io_cache_size": 256, 00:21:14.172 "bdev_auto_examine": true, 00:21:14.172 "iobuf_small_cache_size": 128, 00:21:14.172 "iobuf_large_cache_size": 16 00:21:14.172 } 00:21:14.172 }, 00:21:14.172 { 00:21:14.172 "method": "bdev_raid_set_options", 00:21:14.172 "params": { 00:21:14.172 "process_window_size_kb": 1024, 00:21:14.172 "process_max_bandwidth_mb_sec": 0 00:21:14.172 } 00:21:14.172 }, 00:21:14.172 { 00:21:14.172 "method": "bdev_iscsi_set_options", 00:21:14.172 "params": { 00:21:14.172 "timeout_sec": 30 00:21:14.172 } 00:21:14.172 }, 00:21:14.172 { 00:21:14.172 "method": "bdev_nvme_set_options", 00:21:14.172 "params": { 00:21:14.172 "action_on_timeout": "none", 
00:21:14.172 "timeout_us": 0, 00:21:14.172 "timeout_admin_us": 0, 00:21:14.172 "keep_alive_timeout_ms": 10000, 00:21:14.172 "arbitration_burst": 0, 00:21:14.172 "low_priority_weight": 0, 00:21:14.172 "medium_priority_weight": 0, 00:21:14.172 "high_priority_weight": 0, 00:21:14.172 "nvme_adminq_poll_period_us": 10000, 00:21:14.172 "nvme_ioq_poll_period_us": 0, 00:21:14.172 "io_queue_requests": 512, 00:21:14.172 "delay_cmd_submit": true, 00:21:14.172 "transport_retry_count": 4, 00:21:14.172 "bdev_retry_count": 3, 00:21:14.172 "transport_ack_timeout": 0, 00:21:14.172 "ctrlr_loss_timeout_sec": 0, 00:21:14.172 "reconnect_delay_sec": 0, 00:21:14.172 "fast_io_fail_timeout_sec": 0, 00:21:14.172 "disable_auto_failback": false, 00:21:14.172 "generate_uuids": false, 00:21:14.172 "transport_tos": 0, 00:21:14.172 "nvme_error_stat": false, 00:21:14.172 "rdma_srq_size": 0, 00:21:14.172 "io_path_stat": false, 00:21:14.172 "allow_accel_sequence": false, 00:21:14.172 "rdma_max_cq_size": 0, 00:21:14.172 "rdma_cm_event_timeout_ms": 0Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.172 , 00:21:14.173 "dhchap_digests": [ 00:21:14.173 "sha256", 00:21:14.173 "sha384", 00:21:14.173 "sha512" 00:21:14.173 ], 00:21:14.173 "dhchap_dhgroups": [ 00:21:14.173 "null", 00:21:14.173 "ffdhe2048", 00:21:14.173 "ffdhe3072", 00:21:14.173 "ffdhe4096", 00:21:14.173 "ffdhe6144", 00:21:14.173 "ffdhe8192" 00:21:14.173 ] 00:21:14.173 } 00:21:14.173 }, 00:21:14.173 { 00:21:14.173 "method": "bdev_nvme_attach_controller", 00:21:14.173 "params": { 00:21:14.173 "name": "nvme0", 00:21:14.173 "trtype": "TCP", 00:21:14.173 "adrfam": "IPv4", 00:21:14.173 "traddr": "10.0.0.2", 00:21:14.173 "trsvcid": "4420", 00:21:14.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.173 "prchk_reftag": false, 00:21:14.173 "prchk_guard": false, 00:21:14.173 "ctrlr_loss_timeout_sec": 0, 00:21:14.173 "reconnect_delay_sec": 0, 00:21:14.173 "fast_io_fail_timeout_sec": 0, 00:21:14.173 "psk": "key0", 00:21:14.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.173 "hdgst": false, 00:21:14.173 "ddgst": false, 00:21:14.173 "multipath": "multipath" 00:21:14.173 } 00:21:14.173 }, 00:21:14.173 { 00:21:14.173 "method": "bdev_nvme_set_hotplug", 00:21:14.173 "params": { 00:21:14.173 "period_us": 100000, 00:21:14.173 "enable": false 00:21:14.173 } 00:21:14.173 }, 00:21:14.173 { 00:21:14.173 "method": "bdev_enable_histogram", 00:21:14.173 "params": { 00:21:14.173 "name": "nvme0n1", 00:21:14.173 "enable": true 00:21:14.173 } 00:21:14.173 }, 00:21:14.173 { 00:21:14.173 "method": "bdev_wait_for_examine" 00:21:14.173 } 00:21:14.173 ] 00:21:14.173 }, 00:21:14.173 { 00:21:14.173 "subsystem": "nbd", 00:21:14.173 "config": [] 00:21:14.173 } 00:21:14.173 ] 00:21:14.173 }' 00:21:14.173 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.173 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.173 [2024-12-08 06:25:04.187792] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:21:14.173 [2024-12-08 06:25:04.187866] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092250 ] 00:21:14.173 [2024-12-08 06:25:04.252537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.433 [2024-12-08 06:25:04.309202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.433 [2024-12-08 06:25:04.491771] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.700 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.700 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:14.700 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:14.700 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:14.959 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.960 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:14.960 Running I/O for 1 seconds... 00:21:16.157 3206.00 IOPS, 12.52 MiB/s 00:21:16.157 Latency(us) 00:21:16.158 [2024-12-08T05:25:06.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.158 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:16.158 Verification LBA range: start 0x0 length 0x2000 00:21:16.158 nvme0n1 : 1.04 3218.20 12.57 0.00 0.00 39205.77 7573.05 67186.54 00:21:16.158 [2024-12-08T05:25:06.277Z] =================================================================================================================== 00:21:16.158 [2024-12-08T05:25:06.277Z] Total : 3218.20 12.57 0.00 0.00 39205.77 7573.05 67186.54 00:21:16.158 { 00:21:16.158 "results": [ 00:21:16.158 { 00:21:16.158 "job": "nvme0n1", 00:21:16.158 "core_mask": "0x2", 00:21:16.158 "workload": "verify", 00:21:16.158 "status": "finished", 00:21:16.158 "verify_range": { 00:21:16.158 "start": 0, 00:21:16.158 "length": 8192 00:21:16.158 }, 00:21:16.158 "queue_depth": 128, 00:21:16.158 "io_size": 4096, 00:21:16.158 "runtime": 1.035982, 00:21:16.158 "iops": 3218.2026328642773, 00:21:16.158 "mibps": 12.571104034626083, 00:21:16.158 "io_failed": 0, 00:21:16.158 "io_timeout": 0, 00:21:16.158 "avg_latency_us": 39205.770081539245, 00:21:16.158 "min_latency_us": 7573.0488888888885, 00:21:16.158 "max_latency_us": 67186.5362962963 00:21:16.158 } 00:21:16.158 ], 00:21:16.158 "core_count": 1 00:21:16.158 } 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:16.158 nvmf_trace.0 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1092250 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1092250 ']' 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1092250 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1092250 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1092250' 00:21:16.158 killing process with pid 1092250 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1092250 00:21:16.158 Received shutdown signal, test time was about 1.000000 seconds 00:21:16.158 00:21:16.158 Latency(us) 00:21:16.158 [2024-12-08T05:25:06.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.158 [2024-12-08T05:25:06.277Z] =================================================================================================================== 00:21:16.158 [2024-12-08T05:25:06.277Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.158 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1092250 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:16.417 rmmod nvme_tcp 00:21:16.417 rmmod nvme_fabrics 00:21:16.417 rmmod nvme_keyring 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:16.417 06:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1092097 ']' 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1092097 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1092097 ']' 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1092097 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1092097 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1092097' 00:21:16.417 killing process with pid 1092097 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1092097 00:21:16.417 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1092097 00:21:16.675 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:16.675 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:16.675 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:16.675 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:16.675 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:16.675 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:16.675 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:16.675 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:16.675 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:16.675 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.675 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.675 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Fup7OwbOQ1 /tmp/tmp.C1vWVCV4R9 /tmp/tmp.120wRmBAVN 00:21:19.218 00:21:19.218 real 1m23.140s 00:21:19.218 user 2m16.137s 00:21:19.218 sys 0m28.836s 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.218 ************************************ 00:21:19.218 END TEST nvmf_tls 
00:21:19.218 ************************************ 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:19.218 ************************************ 00:21:19.218 START TEST nvmf_fips 00:21:19.218 ************************************ 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:19.218 * Looking for test storage... 00:21:19.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:19.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.218 --rc genhtml_branch_coverage=1 00:21:19.218 --rc genhtml_function_coverage=1 00:21:19.218 --rc genhtml_legend=1 00:21:19.218 --rc geninfo_all_blocks=1 00:21:19.218 --rc geninfo_unexecuted_blocks=1 00:21:19.218 00:21:19.218 ' 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:19.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.218 --rc genhtml_branch_coverage=1 00:21:19.218 --rc genhtml_function_coverage=1 00:21:19.218 --rc genhtml_legend=1 00:21:19.218 --rc geninfo_all_blocks=1 00:21:19.218 --rc geninfo_unexecuted_blocks=1 00:21:19.218 00:21:19.218 ' 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:19.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.218 --rc genhtml_branch_coverage=1 00:21:19.218 --rc genhtml_function_coverage=1 00:21:19.218 --rc genhtml_legend=1 00:21:19.218 --rc geninfo_all_blocks=1 00:21:19.218 --rc geninfo_unexecuted_blocks=1 00:21:19.218 00:21:19.218 ' 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:19.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.218 --rc genhtml_branch_coverage=1 00:21:19.218 --rc genhtml_function_coverage=1 00:21:19.218 --rc genhtml_legend=1 00:21:19.218 --rc geninfo_all_blocks=1 00:21:19.218 --rc geninfo_unexecuted_blocks=1 00:21:19.218 00:21:19.218 ' 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:19.218 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:19.219 06:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:19.219 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:19.219 Error setting digest 00:21:19.219 4092416A907F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:19.219 4092416A907F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.219 
06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.219 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.750 06:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:21.750 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:21.750 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.750 06:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:21.750 Found net devices under 0000:84:00.0: cvl_0_0 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:21.750 Found net devices under 0000:84:00.1: cvl_0_1 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:21.750 06:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:21.750 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:21.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:21:21.750 00:21:21.750 --- 10.0.0.2 ping statistics --- 00:21:21.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.751 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:21.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:21:21.751 00:21:21.751 --- 10.0.0.1 ping statistics --- 00:21:21.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.751 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1094514 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1094514 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1094514 ']' 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:21.751 [2024-12-08 06:25:11.525091] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
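For reference, the network plumbing traced above in nvmf_tcp_init (nvmf/common.sh@250-291) condenses to the shell sequence below. This is only a collected replay of the traced commands, with the ice ports cvl_0_0/cvl_0_1 discovered earlier substituted in; nothing here goes beyond what the trace itself shows.

# Condensed sketch of nvmf_tcp_init as traced above.
TARGET_IF=cvl_0_0        # served from inside the namespace
INITIATOR_IF=cvl_0_1     # stays in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port, tagged so cleanup can find the rule later,
# then verify reachability in both directions.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

One detail worth noting from the trace: the target port is moved into the namespace before it gets its address, so 10.0.0.2 only ever exists inside cvl_0_0_ns_spdk, and the ping from the root namespace reaches it through cvl_0_1 rather than loopback.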
00:21:21.751 [2024-12-08 06:25:11.525186] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.751 [2024-12-08 06:25:11.597403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.751 [2024-12-08 06:25:11.652727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.751 [2024-12-08 06:25:11.652783] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.751 [2024-12-08 06:25:11.652811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.751 [2024-12-08 06:25:11.652823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.751 [2024-12-08 06:25:11.652833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:21.751 [2024-12-08 06:25:11.653460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.zc2 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.zc2 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.zc2 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.zc2 00:21:21.751 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:22.009 [2024-12-08 06:25:12.089122] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.010 [2024-12-08 06:25:12.105139] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.010 [2024-12-08 06:25:12.105403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.269 malloc0 00:21:22.269 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.269 06:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1094658 00:21:22.269 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:22.269 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1094658 /var/tmp/bdevperf.sock 00:21:22.269 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1094658 ']' 00:21:22.269 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.269 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.269 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.269 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.269 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:22.269 [2024-12-08 06:25:12.238366] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:21:22.269 [2024-12-08 06:25:12.238445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094658 ] 00:21:22.269 [2024-12-08 06:25:12.304938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.269 [2024-12-08 06:25:12.362543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.527 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.527 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:22.527 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.zc2 00:21:22.784 06:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:23.043 [2024-12-08 06:25:12.986638] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.043 TLSTESTn1 00:21:23.043 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:23.301 Running I/O for 10 seconds... 
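The TLS path being exercised here is driven entirely over RPC sockets. A minimal replay of the fips.sh steps traced above, using the same interleaved-format PSK from the log, would look roughly like this (SPDK_ROOT is shorthand for the workspace checkout used in this run):

# Sketch of the fips.sh TLS setup as traced above.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
KEY_PATH=$(mktemp -t spdk-psk.XXX)
echo -n "$KEY" > "$KEY_PATH"
chmod 0600 "$KEY_PATH"           # matches fips.sh@140 in the trace

# Register the PSK with the bdevperf instance and attach over TLS.
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 "$KEY_PATH"
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Start the verify workload bdevperf was configured with (-q 128 -o 4096 -w verify -t 10).
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

Both "TLS support is considered experimental" notices in the trace correspond to the two ends of this path: nvmf_tcp_listen on the target side, and the initiator's bdev_nvme_attach_controller.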
00:21:25.170 3628.00 IOPS, 14.17 MiB/s [2024-12-08T05:25:16.225Z] 3661.50 IOPS, 14.30 MiB/s [2024-12-08T05:25:17.604Z] 3623.67 IOPS, 14.15 MiB/s [2024-12-08T05:25:18.543Z] 3655.00 IOPS, 14.28 MiB/s [2024-12-08T05:25:19.486Z] 3641.20 IOPS, 14.22 MiB/s [2024-12-08T05:25:20.423Z] 3634.00 IOPS, 14.20 MiB/s [2024-12-08T05:25:21.363Z] 3621.43 IOPS, 14.15 MiB/s [2024-12-08T05:25:22.301Z] 3612.62 IOPS, 14.11 MiB/s [2024-12-08T05:25:23.238Z] 3612.44 IOPS, 14.11 MiB/s [2024-12-08T05:25:23.238Z] 3606.40 IOPS, 14.09 MiB/s 00:21:33.119 Latency(us) 00:21:33.119 [2024-12-08T05:25:23.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.119 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:33.119 Verification LBA range: start 0x0 length 0x2000 00:21:33.119 TLSTESTn1 : 10.02 3610.30 14.10 0.00 0.00 35397.96 8786.68 31457.28 00:21:33.119 [2024-12-08T05:25:23.238Z] =================================================================================================================== 00:21:33.119 [2024-12-08T05:25:23.238Z] Total : 3610.30 14.10 0.00 0.00 35397.96 8786.68 31457.28 00:21:33.119 { 00:21:33.119 "results": [ 00:21:33.119 { 00:21:33.119 "job": "TLSTESTn1", 00:21:33.119 "core_mask": "0x4", 00:21:33.119 "workload": "verify", 00:21:33.119 "status": "finished", 00:21:33.119 "verify_range": { 00:21:33.119 "start": 0, 00:21:33.119 "length": 8192 00:21:33.119 }, 00:21:33.119 "queue_depth": 128, 00:21:33.119 "io_size": 4096, 00:21:33.119 "runtime": 10.024103, 00:21:33.119 "iops": 3610.298098493202, 00:21:33.119 "mibps": 14.10272694723907, 00:21:33.119 "io_failed": 0, 00:21:33.119 "io_timeout": 0, 00:21:33.119 "avg_latency_us": 35397.9619000133, 00:21:33.119 "min_latency_us": 8786.678518518518, 00:21:33.119 "max_latency_us": 31457.28 00:21:33.119 } 00:21:33.119 ], 00:21:33.119 "core_count": 1 00:21:33.119 } 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:33.385 nvmf_trace.0 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1094658 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1094658 ']' 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 
-- # kill -0 1094658 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1094658 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1094658' 00:21:33.385 killing process with pid 1094658 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1094658 00:21:33.385 Received shutdown signal, test time was about 10.000000 seconds 00:21:33.385 00:21:33.385 Latency(us) 00:21:33.385 [2024-12-08T05:25:23.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.385 [2024-12-08T05:25:23.504Z] =================================================================================================================== 00:21:33.385 [2024-12-08T05:25:23.504Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.385 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1094658 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:33.645 rmmod nvme_tcp 00:21:33.645 rmmod nvme_fabrics 00:21:33.645 rmmod nvme_keyring 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1094514 ']' 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1094514 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1094514 ']' 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1094514 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1094514 00:21:33.645 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:33.646 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:33.646 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1094514' 00:21:33.646 killing process with pid 1094514 00:21:33.646 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1094514 00:21:33.646 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1094514 00:21:33.905 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:33.905 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:33.905 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:33.905 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:33.905 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:33.905 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:33.905 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:33.905 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:33.905 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:33.905 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.905 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.905 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.440 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:36.440 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.zc2 00:21:36.440 00:21:36.440 real 0m17.186s 00:21:36.440 user 0m21.291s 00:21:36.440 sys 0m6.878s 00:21:36.440 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.440 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 ************************************ 00:21:36.440 END TEST nvmf_fips 00:21:36.440 ************************************ 00:21:36.440 06:25:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:36.440 ************************************ 00:21:36.440 START TEST nvmf_control_msg_list 00:21:36.440 ************************************ 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:36.440 * Looking for test storage... 
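The FIPS-test cleanup traced above follows the standard nvmftestfini shape; condensed, with the one hidden piece flagged (the body of _remove_spdk_ns never appears in the trace, so the namespace deletion below is an assumption about what it does):

# Sketch of the FIPS-test teardown as traced above.
sync
modprobe -v -r nvme-tcp        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring in the log
modprobe -v -r nvme-fabrics    # the real script retries these in a set +e loop (up to 20 times)
kill 1094514 && wait 1094514   # killprocess on the nvmf_tgt pid

# iptr: restore iptables minus the SPDK-tagged rule added during setup.
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk   # assumption: _remove_spdk_ns's body is not shown in the trace
rm -f /tmp/spdk-psk.zc2           # the PSK file must not outlive the test

The grep -v SPDK_NVMF filter is why the setup rule carried its long -m comment tag: it lets teardown strip exactly the rules this test added without touching the rest of the firewall.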
00:21:36.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.440 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:36.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.441 --rc genhtml_branch_coverage=1 00:21:36.441 --rc genhtml_function_coverage=1 00:21:36.441 --rc genhtml_legend=1 00:21:36.441 --rc geninfo_all_blocks=1 00:21:36.441 --rc geninfo_unexecuted_blocks=1 00:21:36.441 00:21:36.441 ' 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:36.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.441 --rc genhtml_branch_coverage=1 00:21:36.441 --rc genhtml_function_coverage=1 00:21:36.441 --rc genhtml_legend=1 00:21:36.441 --rc geninfo_all_blocks=1 00:21:36.441 --rc geninfo_unexecuted_blocks=1 00:21:36.441 00:21:36.441 ' 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:36.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.441 --rc genhtml_branch_coverage=1 00:21:36.441 --rc genhtml_function_coverage=1 00:21:36.441 --rc genhtml_legend=1 00:21:36.441 --rc geninfo_all_blocks=1 00:21:36.441 --rc geninfo_unexecuted_blocks=1 00:21:36.441 00:21:36.441 ' 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:36.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.441 --rc genhtml_branch_coverage=1 00:21:36.441 --rc genhtml_function_coverage=1 00:21:36.441 --rc genhtml_legend=1 00:21:36.441 --rc geninfo_all_blocks=1 00:21:36.441 --rc geninfo_unexecuted_blocks=1 00:21:36.441 00:21:36.441 ' 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.441 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:36.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:36.442 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:38.351 06:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.351 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:38.352 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.352 06:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:38.352 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:38.352 Found net devices under 0000:84:00.0: cvl_0_0 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:38.352 Found net devices under 0000:84:00.1: cvl_0_1 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:38.352 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.611 06:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:38.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:21:38.611 00:21:38.611 --- 10.0.0.2 ping statistics --- 00:21:38.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.611 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:38.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:21:38.611 00:21:38.611 --- 10.0.0.1 ping statistics --- 00:21:38.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.611 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1097942 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1097942 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1097942 ']' 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.611 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:38.611 [2024-12-08 06:25:28.626897] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:21:38.611 [2024-12-08 06:25:28.626992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.611 [2024-12-08 06:25:28.699530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.869 [2024-12-08 06:25:28.754610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.869 [2024-12-08 06:25:28.754665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.869 [2024-12-08 06:25:28.754693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.869 [2024-12-08 06:25:28.754705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.869 [2024-12-08 06:25:28.754715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
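nvmfappstart, traced above for the control_msg_list test, reduces to launching the target inside the namespace prepared earlier and then polling its RPC socket until it answers. A sketch, with the caveat that waitforlisten's probe is not visible in the trace, so the rpc_get_methods call below is an assumption:

# Sketch of nvmfappstart as traced above (nvmf/common.sh@507-510).
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!

# waitforlisten: retry (max_retries=100 in the trace) until the UNIX
# domain socket /var/tmp/spdk.sock accepts RPCs. The exact probe is an
# assumption; the trace only shows the retry bookkeeping.
for ((i = 0; i < 100; i++)); do
    "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done

Note that the RPC socket lives on the host filesystem, so even though nvmf_tgt runs inside cvl_0_0_ns_spdk, rpc.py can reach it from the root namespace without any netns gymnastics.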
00:21:38.869 [2024-12-08 06:25:28.755427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:21:38.869 [2024-12-08 06:25:28.887660] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:21:38.869 Malloc0
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
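
For readers following along: rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, so the target setup traced above and continued just below (the listener add, then three single-queue perf clients) boils down to roughly this RPC sequence; note that --control-msg-num 1 deliberately starves the TCP transport's control-message list, which is the behavior under test here:

    rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a      # -a: allow any host
    rpc.py bdev_malloc_create -b Malloc0 32 512                     # 32 MB bdev, 512-byte blocks
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

(In this run the calls actually go through the namespaced target's /var/tmp/spdk.sock, which rpc_cmd handles for you.)
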
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:21:38.869 [2024-12-08 06:25:28.926625] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:38.869 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1098015
00:21:38.870 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:38.870 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1098017
00:21:38.870 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:38.870 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1098019
00:21:38.870 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:38.870 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1098015
00:21:39.129 [2024-12-08 06:25:29.005655] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:39.129 [2024-12-08 06:25:29.006030] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:39.129 [2024-12-08 06:25:29.006289] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:40.069 Initializing NVMe Controllers
00:21:40.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:40.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:21:40.070 Initialization complete. Launching workers.
00:21:40.070 ========================================================
00:21:40.070 Latency(us)
00:21:40.070 Device Information : IOPS MiB/s Average min max
00:21:40.070 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40894.73 40786.59 41001.19
00:21:40.070 ========================================================
00:21:40.070 Total : 25.00 0.10 40894.73 40786.59 41001.19
00:21:40.070
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1098017
00:21:40.070 Initializing NVMe Controllers
00:21:40.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:40.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:21:40.070 Initialization complete. Launching workers.
00:21:40.070 ========================================================
00:21:40.070 Latency(us)
00:21:40.070 Device Information : IOPS MiB/s Average min max
00:21:40.070 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40907.29 40812.17 41188.95
00:21:40.070 ========================================================
00:21:40.070 Total : 25.00 0.10 40907.29 40812.17 41188.95
00:21:40.070
00:21:40.070 Initializing NVMe Controllers
00:21:40.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:40.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:21:40.070 Initialization complete. Launching workers.
00:21:40.070 ========================================================
00:21:40.070 Latency(us)
00:21:40.070 Device Information : IOPS MiB/s Average min max
00:21:40.070 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40995.99 40809.98 41971.79
00:21:40.070 ========================================================
00:21:40.070 Total : 25.00 0.10 40995.99 40809.98 41971.79
00:21:40.070
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1098019
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:40.070 rmmod nvme_tcp
00:21:40.070 rmmod nvme_fabrics
00:21:40.070 rmmod nvme_keyring
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1097942 ']'
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1097942
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1097942 ']'
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1097942
00:21:40.070 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1097942
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1097942'
00:21:40.329 killing process with pid 1097942
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1097942
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1097942
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:40.329 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:42.869 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:42.870
00:21:42.870 real 0m6.463s
00:21:42.870 user 0m5.814s
00:21:42.870 sys 0m2.571s
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:21:42.870 ************************************
00:21:42.870 END TEST nvmf_control_msg_list
************************************
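
The nvmftestfini teardown traced above is worth spelling out once, since both tests in this section end the same way. A rough sketch, with the _remove_spdk_ns helper (whose body is not shown in this log) approximated by a plain namespace delete:

    kill $nvmfpid && wait $nvmfpid                        # killprocess: stop nvmf_tgt
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop only SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                       # approximates _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # clear the initiator-side address

Filtering the saved ruleset on the SPDK_NVMF comment is what makes the ACCEPT rule inserted at the start of the test safe to add and remove without touching unrelated firewall state.
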
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:42.870 ************************************
00:21:42.870 START TEST nvmf_wait_for_buf
00:21:42.870 ************************************
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:21:42.870 * Looking for test storage...
00:21:42.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-:
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-:
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<'
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:42.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:42.870 --rc genhtml_branch_coverage=1
00:21:42.870 --rc genhtml_function_coverage=1
00:21:42.870 --rc genhtml_legend=1
00:21:42.870 --rc geninfo_all_blocks=1
00:21:42.870 --rc geninfo_unexecuted_blocks=1
00:21:42.870
00:21:42.870 '
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:42.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:42.870 --rc genhtml_branch_coverage=1
00:21:42.870 --rc genhtml_function_coverage=1
00:21:42.870 --rc genhtml_legend=1
00:21:42.870 --rc geninfo_all_blocks=1
00:21:42.870 --rc geninfo_unexecuted_blocks=1
00:21:42.870
00:21:42.870 '
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:21:42.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:42.870 --rc genhtml_branch_coverage=1
00:21:42.870 --rc genhtml_function_coverage=1
00:21:42.870 --rc genhtml_legend=1
00:21:42.870 --rc geninfo_all_blocks=1
00:21:42.870 --rc geninfo_unexecuted_blocks=1
00:21:42.870
00:21:42.870 '
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:21:42.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:42.870 --rc genhtml_branch_coverage=1
00:21:42.870 --rc genhtml_function_coverage=1
00:21:42.870 --rc genhtml_legend=1
00:21:42.870 --rc geninfo_all_blocks=1
00:21:42.870 --rc geninfo_unexecuted_blocks=1
00:21:42.870
00:21:42.870 '
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:42.870 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:21:42.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:42.871 06:25:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=()
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=()
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=()
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=()
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=()
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:44.796 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:21:44.797 Found 0000:84:00.0 (0x8086 - 0x159b)
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:21:44.797 Found 0000:84:00.1 (0x8086 - 0x159b)
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:21:44.797 Found net devices under 0000:84:00.0: cvl_0_0
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:21:44.797 Found net devices under 0000:84:00.1: cvl_0_1
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:44.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:44.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms
00:21:44.797
00:21:44.797 --- 10.0.0.2 ping statistics ---
00:21:44.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:44.797 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:44.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:44.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms
00:21:44.797
00:21:44.797 --- 10.0.0.1 ping statistics ---
00:21:44.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:44.797 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:44.797 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:45.057 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:21:45.057 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:45.057 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:45.057 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:45.057 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1100173
00:21:45.057 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:21:45.057 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1100173
00:21:45.057 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1100173 ']'
00:21:45.057 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:45.057 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:45.057 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:45.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:45.057 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:45.057 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
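
Before the target comes up here, the nvmf_tcp_init trace above has rebuilt the two-port topology: one ice port is moved into a private namespace for the target while its sibling stays in the default namespace as the initiator, so the two ends cannot short-circuit through loopback. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # nvmf_tgt then runs inside that namespace; --wait-for-rpc holds initialization
    # until the accel/iobuf options below have been applied
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
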
00:21:45.057 [2024-12-08 06:25:34.985353] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:21:45.057 [2024-12-08 06:25:34.985431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:45.057 [2024-12-08 06:25:35.059423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:45.057 [2024-12-08 06:25:35.115919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:45.057 [2024-12-08 06:25:35.115970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:45.057 [2024-12-08 06:25:35.116006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:45.057 [2024-12-08 06:25:35.116018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:45.057 [2024-12-08 06:25:35.116027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:45.057 [2024-12-08 06:25:35.116620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:45.315 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:45.315 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0
00:21:45.315 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:45.315 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:45.315 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:45.315 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:45.315 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:21:45.315 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:21:45.315 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:45.316 Malloc0
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:45.316 [2024-12-08 06:25:35.354564] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:45.316 [2024-12-08 06:25:35.378815] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:45.316 06:25:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:45.574 [2024-12-08 06:25:35.464861] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
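
The pass criterion for this test comes right after the perf results below: with the small iobuf pool capped at 154 buffers (the iobuf_set_options call above) and perf pushing four outstanding 128 KiB reads, the transport is expected to exhaust the pool and fall back to its buffer-wait queue. Roughly, the check the trace performs:

    retry_count=$(rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && exit 1   # zero retries: the wait path was never exercised

In this run the counter comes back as 1654, so the wait-for-buf path was hit and the test passes.
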
00:21:46.999 Initializing NVMe Controllers
00:21:46.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:46.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:21:46.999 Initialization complete. Launching workers.
00:21:46.999 ========================================================
00:21:46.999 Latency(us)
00:21:46.999 Device Information : IOPS MiB/s Average min max
00:21:46.999 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 105.00 13.12 39595.97 24020.88 110665.13
00:21:46.999 ========================================================
00:21:46.999 Total : 105.00 13.12 39595.97 24020.88 110665.13
00:21:46.999
00:21:46.999 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1654
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1654 -eq 0 ]]
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:47.000 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:47.000 rmmod nvme_tcp
00:21:47.000 rmmod nvme_fabrics
00:21:47.000 rmmod nvme_keyring
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1100173 ']'
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1100173
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1100173 ']'
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1100173
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1100173
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1100173'
00:21:47.257 killing process with pid 1100173
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1100173
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1100173
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:47.257 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:49.840 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:49.840
00:21:49.840 real 0m6.858s
00:21:49.840 user 0m3.232s
00:21:49.840 sys 0m2.083s
00:21:49.840 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:49.840 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:49.840 ************************************
00:21:49.840 END TEST nvmf_wait_for_buf
00:21:49.840 ************************************
00:21:49.840 06:25:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']'
00:21:49.840 06:25:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]]
00:21:49.840 06:25:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']'
00:21:49.840 06:25:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs
00:21:49.840 06:25:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable
00:21:49.840 06:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=()
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=()
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=()
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=()
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=()
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:51.809 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.809 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:51.809 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:51.810 Found net devices under 0000:84:00.0: cvl_0_0 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:51.810 Found net devices under 0000:84:00.1: cvl_0_1 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:51.810 ************************************ 00:21:51.810 START TEST nvmf_perf_adq 00:21:51.810 ************************************ 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:51.810 * Looking for test storage... 00:21:51.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.810 06:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:51.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.810 --rc genhtml_branch_coverage=1 00:21:51.810 --rc genhtml_function_coverage=1 00:21:51.810 --rc genhtml_legend=1 00:21:51.810 --rc geninfo_all_blocks=1 00:21:51.810 --rc geninfo_unexecuted_blocks=1 00:21:51.810 00:21:51.810 ' 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:51.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.810 --rc genhtml_branch_coverage=1 00:21:51.810 --rc genhtml_function_coverage=1 00:21:51.810 --rc genhtml_legend=1 00:21:51.810 --rc geninfo_all_blocks=1 00:21:51.810 --rc geninfo_unexecuted_blocks=1 00:21:51.810 00:21:51.810 ' 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:51.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.810 --rc genhtml_branch_coverage=1 00:21:51.810 --rc genhtml_function_coverage=1 00:21:51.810 --rc genhtml_legend=1 00:21:51.810 --rc geninfo_all_blocks=1 00:21:51.810 --rc geninfo_unexecuted_blocks=1 00:21:51.810 00:21:51.810 ' 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:51.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.810 --rc genhtml_branch_coverage=1 00:21:51.810 --rc genhtml_function_coverage=1 00:21:51.810 --rc genhtml_legend=1 00:21:51.810 --rc geninfo_all_blocks=1 00:21:51.810 --rc geninfo_unexecuted_blocks=1 00:21:51.810 00:21:51.810 ' 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
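The `lt 1.15 2` call traced above is scripts/common.sh comparing the installed lcov version against 2.x field by field so the harness can pick compatible `--rc` option spellings. A minimal standalone sketch of that compare pattern (hypothetical `version_lt` helper for illustration, not the actual cmp_versions implementation):

# version_lt A B: succeed (return 0) when dotted version A sorts before B
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)    # split "1.15" -> (1 15), "2" -> (2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}    # missing fields compare as 0
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

The final line mirrors the outcome visible in the trace: lcov reports 1.15, the less-than check succeeds, and the pre-2.0 `--rc lcov_*` spellings are exported into LCOV_OPTS.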
00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.810 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:51.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:51.811 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:51.811 06:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:53.717 06:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:53.717 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:53.717 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:53.717 Found net devices under 0000:84:00.0: cvl_0_0 00:21:53.717 06:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:53.717 Found net devices under 0000:84:00.1: cvl_0_1 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.717 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:53.718 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.718 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:53.718 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:53.718 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:53.718 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:53.718 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:54.655 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:56.553 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:01.833 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:01.833 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:01.833 Found net devices under 0000:84:00.0: cvl_0_0 00:22:01.833 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:01.834 Found net devices under 0000:84:00.1: cvl_0_1 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:22:01.834 00:22:01.834 --- 10.0.0.2 ping statistics --- 00:22:01.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.834 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:01.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:22:01.834 00:22:01.834 --- 10.0.0.1 ping statistics --- 00:22:01.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.834 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1104932 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1104932 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1104932 ']' 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.834 [2024-12-08 06:25:51.579369] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
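The nvmf_tcp_init sequence traced just above builds the standard two-port loopback topology for phy runs: the first E810 port (cvl_0_0) moves into a private network namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and both directions are ping-verified. Condensed into a standalone sketch (device names and addresses taken from this run; error handling and the iptables comment tag omitted):

NS=cvl_0_0_ns_spdk   # namespace that will host nvmf_tgt
TGT=cvl_0_0          # target-side port -> 10.0.0.2
INI=cvl_0_1          # initiator-side port -> 10.0.0.1

ip -4 addr flush "$TGT"; ip -4 addr flush "$INI"
ip netns add "$NS"
ip link set "$TGT" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
ip link set "$INI" up
ip netns exec "$NS" ip link set "$TGT" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

Every later target-side command, including the nvmf_tgt launch above, is then prefixed with `ip netns exec cvl_0_0_ns_spdk` via NVMF_TARGET_NS_CMD.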
00:22:01.834 [2024-12-08 06:25:51.579454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.834 [2024-12-08 06:25:51.649472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:01.834 [2024-12-08 06:25:51.705410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.834 [2024-12-08 06:25:51.705472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.834 [2024-12-08 06:25:51.705499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.834 [2024-12-08 06:25:51.705510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.834 [2024-12-08 06:25:51.705519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.834 [2024-12-08 06:25:51.707264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.834 [2024-12-08 06:25:51.707369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.834 [2024-12-08 06:25:51.707463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:01.834 [2024-12-08 06:25:51.707471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.834 
06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:01.834 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:02.093 [2024-12-08 06:25:52.047555] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:02.093 Malloc1
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:02.093 [2024-12-08 06:25:52.110101] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1105082
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:22:02.093 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
00:22:04.022 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:22:04.022 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:04.022 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:04.279 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.279 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:22:04.279 "tick_rate": 2700000000,
00:22:04.279 "poll_groups": [
00:22:04.279 {
00:22:04.279 "name": "nvmf_tgt_poll_group_000",
00:22:04.279 "admin_qpairs": 1,
00:22:04.279 "io_qpairs": 1,
00:22:04.279 "current_admin_qpairs": 1,
00:22:04.279 "current_io_qpairs": 1,
00:22:04.279 "pending_bdev_io": 0,
00:22:04.279 "completed_nvme_io": 19422,
00:22:04.279 "transports": [
00:22:04.279 {
00:22:04.279 "trtype": "TCP"
00:22:04.279 }
00:22:04.279 ]
00:22:04.279 },
00:22:04.279 {
00:22:04.279 "name": "nvmf_tgt_poll_group_001",
00:22:04.279 "admin_qpairs": 0,
00:22:04.279 "io_qpairs": 1,
00:22:04.279 "current_admin_qpairs": 0,
00:22:04.279 "current_io_qpairs": 1,
00:22:04.279 "pending_bdev_io": 0,
00:22:04.279 "completed_nvme_io": 19529,
00:22:04.279 "transports": [
00:22:04.279 {
00:22:04.279 "trtype": "TCP"
00:22:04.279 }
00:22:04.279 ]
00:22:04.279 },
00:22:04.279 {
00:22:04.279 "name": "nvmf_tgt_poll_group_002",
00:22:04.279 "admin_qpairs": 0,
00:22:04.279 "io_qpairs": 1,
00:22:04.279 "current_admin_qpairs": 0,
00:22:04.279 "current_io_qpairs": 1,
00:22:04.279 "pending_bdev_io": 0,
00:22:04.279 "completed_nvme_io": 19969,
00:22:04.279 "transports": [
00:22:04.279 {
00:22:04.279 "trtype": "TCP"
00:22:04.279 }
00:22:04.279 ]
00:22:04.279 },
00:22:04.279 {
00:22:04.279 "name": "nvmf_tgt_poll_group_003",
00:22:04.279 "admin_qpairs": 0,
00:22:04.279 "io_qpairs": 1,
00:22:04.279 "current_admin_qpairs": 0,
00:22:04.279 "current_io_qpairs": 1,
00:22:04.279 "pending_bdev_io": 0,
00:22:04.279 "completed_nvme_io": 19137,
00:22:04.279 "transports": [
00:22:04.279 {
00:22:04.279 "trtype": "TCP"
00:22:04.279 }
00:22:04.279 ]
00:22:04.279 }
00:22:04.279 ]
00:22:04.279 }'
00:22:04.279 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:22:04.279 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:22:04.279 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:22:04.279 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:22:04.279 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1105082
00:22:12.396 Initializing NVMe Controllers
00:22:12.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:12.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:12.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:12.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:12.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:12.396 Initialization complete. Launching workers.
00:22:12.396 ========================================================
00:22:12.396 Latency(us)
00:22:12.396 Device Information : IOPS MiB/s Average min max
00:22:12.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10161.88 39.69 6300.02 2055.59 10796.86
00:22:12.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10338.17 40.38 6190.01 2163.27 10518.52
00:22:12.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10507.07 41.04 6092.31 2541.47 9920.49
00:22:12.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10265.27 40.10 6236.17 2458.15 10708.07
00:22:12.396 ========================================================
00:22:12.396 Total : 41272.40 161.22 6203.70 2055.59 10796.86
00:22:12.396
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:12.396 rmmod nvme_tcp
00:22:12.396 rmmod nvme_fabrics
00:22:12.396 rmmod nvme_keyring
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1104932 ']'
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1104932
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1104932 ']'
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1104932
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1104932
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1104932'
00:22:12.396 killing process with pid 1104932
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1104932
00:22:12.396 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1104932
00:22:12.655 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:12.655 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:12.655 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:12.655 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:22:12.655 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:22:12.655 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:12.655 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:22:12.655 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:12.655 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:12.655 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:12.655 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:12.655 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:14.565 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:14.565 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver
00:22:14.565 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:22:14.565 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:22:15.500 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:22:17.399 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:22.676 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:22:22.677 Found 0000:84:00.0 (0x8086 - 0x159b)
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:22:22.677 Found 0000:84:00.1 (0x8086 - 0x159b)
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:22:22.677 Found net devices under 0000:84:00.0: cvl_0_0
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:22:22.677 Found net devices under 0000:84:00.1: cvl_0_1
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:22.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:22.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms
00:22:22.677
00:22:22.677 --- 10.0.0.2 ping statistics ---
00:22:22.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:22.677 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:22.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:22.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms
00:22:22.677
00:22:22.677 --- 10.0.0.1 ping statistics ---
00:22:22.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:22.677 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:22:22.677 net.core.busy_poll = 1
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:22:22.677 net.core.busy_read = 1
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:22.677 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:22.678 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1107693
00:22:22.678 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:22:22.678 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1107693
00:22:22.678 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1107693 ']'
00:22:22.678 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:22.678 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:22.678 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:22.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:22.678 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:22.678 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:22.936 [2024-12-08 06:26:12.676778] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:22:22.936 [2024-12-08 06:26:12.676857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:22.936 [2024-12-08 06:26:12.749159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:22.936 [2024-12-08 06:26:12.809381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:22.936 [2024-12-08 06:26:12.809431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:22.936 [2024-12-08 06:26:12.809459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:22.936 [2024-12-08 06:26:12.809470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:22.936 [2024-12-08 06:26:12.809480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:22.936 [2024-12-08 06:26:12.813741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:22.936 [2024-12-08 06:26:12.813790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:22.937 [2024-12-08 06:26:12.813867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:22:22.937 [2024-12-08 06:26:12.813871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:22.937 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:23.195 [2024-12-08 06:26:13.068217] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:23.195 Malloc1
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:23.195 [2024-12-08 06:26:13.127299] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1107728
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:22:23.195 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2
00:22:25.097 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats
00:22:25.097 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:25.097 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:25.097 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:25.097 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:22:25.097 "tick_rate": 2700000000,
00:22:25.097 "poll_groups": [
00:22:25.097 {
00:22:25.097 "name": "nvmf_tgt_poll_group_000",
00:22:25.097 "admin_qpairs": 1,
00:22:25.097 "io_qpairs": 1,
00:22:25.097 "current_admin_qpairs": 1,
00:22:25.097 "current_io_qpairs": 1,
00:22:25.097 "pending_bdev_io": 0,
00:22:25.097 "completed_nvme_io": 24186,
00:22:25.097 "transports": [
00:22:25.097 {
00:22:25.097 "trtype": "TCP"
00:22:25.097 }
00:22:25.097 ]
00:22:25.097 },
00:22:25.097 {
00:22:25.097 "name": "nvmf_tgt_poll_group_001",
00:22:25.097 "admin_qpairs": 0,
00:22:25.097 "io_qpairs": 3,
00:22:25.097 "current_admin_qpairs": 0,
00:22:25.097 "current_io_qpairs": 3,
00:22:25.097 "pending_bdev_io": 0,
00:22:25.097 "completed_nvme_io": 25714,
00:22:25.097 "transports": [
00:22:25.097 {
00:22:25.097 "trtype": "TCP"
00:22:25.097 }
00:22:25.097 ]
00:22:25.097 },
00:22:25.097 {
00:22:25.097 "name": "nvmf_tgt_poll_group_002",
00:22:25.097 "admin_qpairs": 0,
00:22:25.097 "io_qpairs": 0,
00:22:25.097 "current_admin_qpairs": 0,
00:22:25.097 "current_io_qpairs": 0,
00:22:25.097 "pending_bdev_io": 0,
00:22:25.097 "completed_nvme_io": 0,
00:22:25.097 "transports": [
00:22:25.097 {
00:22:25.097 "trtype": "TCP"
00:22:25.097 }
00:22:25.097 ]
00:22:25.097 },
00:22:25.097 {
00:22:25.097 "name": "nvmf_tgt_poll_group_003",
00:22:25.097 "admin_qpairs": 0,
00:22:25.097 "io_qpairs": 0,
00:22:25.097 "current_admin_qpairs": 0,
00:22:25.097 "current_io_qpairs": 0,
00:22:25.097 "pending_bdev_io": 0,
00:22:25.097 "completed_nvme_io": 0,
00:22:25.097 "transports": [
00:22:25.097 {
00:22:25.097 "trtype": "TCP"
00:22:25.097 }
00:22:25.097 ]
00:22:25.097 }
00:22:25.097 ]
00:22:25.097 }'
00:22:25.097 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:22:25.097 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:22:25.097 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2
00:22:25.097 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]]
00:22:25.097 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1107728
00:22:33.310 Initializing NVMe Controllers
00:22:33.310 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:33.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:33.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:33.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:33.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:33.310 Initialization complete. Launching workers.
00:22:33.310 ========================================================
00:22:33.310 Latency(us)
00:22:33.310 Device Information : IOPS MiB/s Average min max
00:22:33.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4087.90 15.97 15704.00 2014.04 62860.54
00:22:33.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5033.10 19.66 12720.85 1823.18 61069.29
00:22:33.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12800.20 50.00 4999.74 1800.11 47106.57
00:22:33.310 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4659.20 18.20 13740.48 1902.24 61795.75
00:22:33.310 ========================================================
00:22:33.310 Total : 26580.40 103.83 9640.15 1800.11 62860.54
00:22:33.310
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:33.310 rmmod nvme_tcp
00:22:33.310 rmmod nvme_fabrics
00:22:33.310 rmmod nvme_keyring
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1107693 ']'
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1107693
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1107693 ']'
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1107693
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1107693
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1107693'
00:22:33.310 killing process with pid 1107693
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1107693
00:22:33.310 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1107693
00:22:33.571 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:33.571 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:33.571 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:33.571 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:22:33.571 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:22:33.571 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:22:33.571 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:33.571 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:33.571 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:33.571 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:33.571 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:33.571 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT
00:22:36.863
00:22:36.863 real 0m45.215s
00:22:36.863 user 2m41.121s
00:22:36.863 sys 0m9.655s
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:36.863 ************************************
00:22:36.863 END TEST nvmf_perf_adq
00:22:36.863 ************************************
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:36.863 ************************************
00:22:36.863 START TEST nvmf_shutdown
00:22:36.863 ************************************
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:22:36.863 * Looking for test storage...
00:22:36.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:36.863 --rc genhtml_branch_coverage=1
00:22:36.863 --rc genhtml_function_coverage=1
00:22:36.863 --rc genhtml_legend=1
00:22:36.863 --rc geninfo_all_blocks=1
00:22:36.863 --rc geninfo_unexecuted_blocks=1
00:22:36.863
00:22:36.863 '
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:36.863 --rc genhtml_branch_coverage=1
00:22:36.863 --rc genhtml_function_coverage=1
00:22:36.863 --rc genhtml_legend=1
00:22:36.863 --rc geninfo_all_blocks=1
00:22:36.863 --rc geninfo_unexecuted_blocks=1
00:22:36.863
00:22:36.863 '
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:22:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:36.863 --rc genhtml_branch_coverage=1
00:22:36.863 --rc genhtml_function_coverage=1
00:22:36.863 --rc genhtml_legend=1
00:22:36.863 --rc geninfo_all_blocks=1
00:22:36.863 --rc geninfo_unexecuted_blocks=1
00:22:36.863
00:22:36.863 '
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:22:36.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:36.863 --rc genhtml_branch_coverage=1
00:22:36.863 --rc genhtml_function_coverage=1
00:22:36.863 --rc genhtml_legend=1
00:22:36.863 --rc geninfo_all_blocks=1
00:22:36.863 --rc geninfo_unexecuted_blocks=1
00:22:36.863
00:22:36.863 '
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:22:36.863 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:36.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:36.864 ************************************
00:22:36.864 START TEST nvmf_shutdown_tc1
00:22:36.864 ************************************
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable
00:22:36.864 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=()
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=()
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=()
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=()
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=()
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:22:39.403 Found 0000:84:00.0 (0x8086 - 0x159b)
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:39.403 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:22:39.404 Found 0000:84:00.1 (0x8086 - 0x159b)
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:22:39.404 Found net devices under 0000:84:00.0: cvl_0_0
00:22:39.404 06:26:29
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:39.404 Found net devices under 0000:84:00.1: cvl_0_1 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:22:39.404 00:22:39.404 --- 10.0.0.2 ping statistics --- 00:22:39.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.404 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:22:39.404 00:22:39.404 --- 10.0.0.1 ping statistics --- 00:22:39.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.404 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1111049 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1111049 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1111049 ']' 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
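The nvmftestinit sequence traced above is easier to follow out of xtrace form. Condensed, and using only commands and names that actually appear in the trace (cvl_0_0 and cvl_0_1 are the two ice ports found earlier; nvmf/common.sh helper internals are elided), the topology setup amounts to:

    # Move port 0 into its own namespace to act as the NVMe/TCP target;
    # port 1 stays in the default namespace as the initiator side.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
    ping -c 1 10.0.0.2                                # default ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> default ns

Both pings succeeding is what lets the helper return 0; NVMF_APP is then prefixed with 'ip netns exec cvl_0_0_ns_spdk', so the target started below listens on 10.0.0.2:4420 inside the namespace while the initiator connects from the default namespace over cvl_0_1.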
00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.404 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.404 [2024-12-08 06:26:29.409754] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:22:39.404 [2024-12-08 06:26:29.409834] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.404 [2024-12-08 06:26:29.482279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.665 [2024-12-08 06:26:29.541187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.665 [2024-12-08 06:26:29.541237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.665 [2024-12-08 06:26:29.541266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.665 [2024-12-08 06:26:29.541285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.665 [2024-12-08 06:26:29.541295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.665 [2024-12-08 06:26:29.543184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.665 [2024-12-08 06:26:29.543253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.665 [2024-12-08 06:26:29.543324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:39.665 [2024-12-08 06:26:29.543326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.665 [2024-12-08 06:26:29.692876] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:39.665 06:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.665 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.666 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.666 Malloc1 
00:22:39.926 [2024-12-08 06:26:29.793119] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.926 Malloc2 00:22:39.926 Malloc3 00:22:39.926 Malloc4 00:22:39.926 Malloc5 00:22:39.926 Malloc6 00:22:40.187 Malloc7 00:22:40.187 Malloc8 00:22:40.187 Malloc9 00:22:40.187 Malloc10 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1111230 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1111230 /var/tmp/bdevperf.sock 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1111230 ']' 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
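A note on the '--json /dev/fd/63' argument visible in the bdev_svc invocation above: that is bash process substitution, not a file on disk. shutdown.sh generates the initiator configuration on the fly with gen_nvmf_target_json and feeds it straight to the app. A rough sketch of the pattern, with paths and flags as logged and the PID bookkeeping simplified:

    # Start a minimal SPDK app on its own RPC socket, configured from JSON
    # produced by gen_nvmf_target_json; <(...) expands to a /dev/fd/N path.
    $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
    perfpid=$!
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock  # block until the RPC socket is up

The same generated configuration is reused further down when the actual bdevperf run starts with -q 64 (queue depth), -o 65536 (64 KiB I/O size), -w verify (write then read-back-and-compare workload) and -t 1 (one second per run).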
00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.187 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.187 { 00:22:40.187 "params": { 00:22:40.187 "name": "Nvme$subsystem", 00:22:40.187 "trtype": "$TEST_TRANSPORT", 00:22:40.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.187 "adrfam": "ipv4", 00:22:40.187 "trsvcid": "$NVMF_PORT", 00:22:40.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.188 "hdgst": ${hdgst:-false}, 00:22:40.188 "ddgst": ${ddgst:-false} 00:22:40.188 }, 00:22:40.188 "method": "bdev_nvme_attach_controller" 00:22:40.188 } 00:22:40.188 EOF 00:22:40.188 )") 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.188 { 00:22:40.188 "params": { 00:22:40.188 "name": "Nvme$subsystem", 00:22:40.188 "trtype": "$TEST_TRANSPORT", 00:22:40.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.188 "adrfam": "ipv4", 00:22:40.188 "trsvcid": "$NVMF_PORT", 00:22:40.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.188 "hdgst": ${hdgst:-false}, 00:22:40.188 "ddgst": ${ddgst:-false} 00:22:40.188 }, 00:22:40.188 "method": "bdev_nvme_attach_controller" 00:22:40.188 } 00:22:40.188 EOF 00:22:40.188 )") 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.188 { 00:22:40.188 "params": { 00:22:40.188 "name": "Nvme$subsystem", 00:22:40.188 "trtype": "$TEST_TRANSPORT", 00:22:40.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.188 "adrfam": "ipv4", 00:22:40.188 "trsvcid": "$NVMF_PORT", 00:22:40.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.188 "hdgst": ${hdgst:-false}, 00:22:40.188 "ddgst": ${ddgst:-false} 00:22:40.188 }, 00:22:40.188 "method": "bdev_nvme_attach_controller" 00:22:40.188 } 00:22:40.188 EOF 00:22:40.188 )") 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.188 { 00:22:40.188 "params": { 00:22:40.188 "name": "Nvme$subsystem", 00:22:40.188 
"trtype": "$TEST_TRANSPORT", 00:22:40.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.188 "adrfam": "ipv4", 00:22:40.188 "trsvcid": "$NVMF_PORT", 00:22:40.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.188 "hdgst": ${hdgst:-false}, 00:22:40.188 "ddgst": ${ddgst:-false} 00:22:40.188 }, 00:22:40.188 "method": "bdev_nvme_attach_controller" 00:22:40.188 } 00:22:40.188 EOF 00:22:40.188 )") 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.188 { 00:22:40.188 "params": { 00:22:40.188 "name": "Nvme$subsystem", 00:22:40.188 "trtype": "$TEST_TRANSPORT", 00:22:40.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.188 "adrfam": "ipv4", 00:22:40.188 "trsvcid": "$NVMF_PORT", 00:22:40.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.188 "hdgst": ${hdgst:-false}, 00:22:40.188 "ddgst": ${ddgst:-false} 00:22:40.188 }, 00:22:40.188 "method": "bdev_nvme_attach_controller" 00:22:40.188 } 00:22:40.188 EOF 00:22:40.188 )") 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.188 { 00:22:40.188 "params": { 00:22:40.188 "name": "Nvme$subsystem", 00:22:40.188 "trtype": "$TEST_TRANSPORT", 00:22:40.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.188 "adrfam": "ipv4", 00:22:40.188 "trsvcid": "$NVMF_PORT", 00:22:40.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.188 "hdgst": ${hdgst:-false}, 00:22:40.188 "ddgst": ${ddgst:-false} 00:22:40.188 }, 00:22:40.188 "method": "bdev_nvme_attach_controller" 00:22:40.188 } 00:22:40.188 EOF 00:22:40.188 )") 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.188 { 00:22:40.188 "params": { 00:22:40.188 "name": "Nvme$subsystem", 00:22:40.188 "trtype": "$TEST_TRANSPORT", 00:22:40.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.188 "adrfam": "ipv4", 00:22:40.188 "trsvcid": "$NVMF_PORT", 00:22:40.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.188 "hdgst": ${hdgst:-false}, 00:22:40.188 "ddgst": ${ddgst:-false} 00:22:40.188 }, 00:22:40.188 "method": "bdev_nvme_attach_controller" 00:22:40.188 } 00:22:40.188 EOF 00:22:40.188 )") 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.188 06:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.188 { 00:22:40.188 "params": { 00:22:40.188 "name": "Nvme$subsystem", 00:22:40.188 "trtype": "$TEST_TRANSPORT", 00:22:40.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.188 "adrfam": "ipv4", 00:22:40.188 "trsvcid": "$NVMF_PORT", 00:22:40.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.188 "hdgst": ${hdgst:-false}, 00:22:40.188 "ddgst": ${ddgst:-false} 00:22:40.188 }, 00:22:40.188 "method": "bdev_nvme_attach_controller" 00:22:40.188 } 00:22:40.188 EOF 00:22:40.188 )") 00:22:40.188 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.189 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.189 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.189 { 00:22:40.189 "params": { 00:22:40.189 "name": "Nvme$subsystem", 00:22:40.189 "trtype": "$TEST_TRANSPORT", 00:22:40.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.189 "adrfam": "ipv4", 00:22:40.189 "trsvcid": "$NVMF_PORT", 00:22:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.189 "hdgst": ${hdgst:-false}, 00:22:40.189 "ddgst": ${ddgst:-false} 00:22:40.189 }, 00:22:40.189 "method": "bdev_nvme_attach_controller" 00:22:40.189 } 00:22:40.189 EOF 00:22:40.189 )") 00:22:40.189 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.189 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.189 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.189 { 00:22:40.189 "params": { 00:22:40.189 "name": "Nvme$subsystem", 00:22:40.189 "trtype": "$TEST_TRANSPORT", 00:22:40.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.189 "adrfam": "ipv4", 00:22:40.189 "trsvcid": "$NVMF_PORT", 00:22:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.189 "hdgst": ${hdgst:-false}, 00:22:40.189 "ddgst": ${ddgst:-false} 00:22:40.189 }, 00:22:40.189 "method": "bdev_nvme_attach_controller" 00:22:40.189 } 00:22:40.189 EOF 00:22:40.189 )") 00:22:40.189 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.189 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:40.189 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:40.189 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:40.189 "params": { 00:22:40.189 "name": "Nvme1", 00:22:40.189 "trtype": "tcp", 00:22:40.189 "traddr": "10.0.0.2", 00:22:40.189 "adrfam": "ipv4", 00:22:40.189 "trsvcid": "4420", 00:22:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.189 "hdgst": false, 00:22:40.189 "ddgst": false 00:22:40.189 }, 00:22:40.189 "method": "bdev_nvme_attach_controller" 00:22:40.189 },{ 00:22:40.189 "params": { 00:22:40.189 "name": "Nvme2", 00:22:40.189 "trtype": "tcp", 00:22:40.189 "traddr": "10.0.0.2", 00:22:40.189 "adrfam": "ipv4", 00:22:40.189 "trsvcid": "4420", 00:22:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:40.189 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:40.189 "hdgst": false, 00:22:40.189 "ddgst": false 00:22:40.189 }, 00:22:40.189 "method": "bdev_nvme_attach_controller" 00:22:40.189 },{ 00:22:40.189 "params": { 00:22:40.189 "name": "Nvme3", 00:22:40.189 "trtype": "tcp", 00:22:40.189 "traddr": "10.0.0.2", 00:22:40.189 "adrfam": "ipv4", 00:22:40.189 "trsvcid": "4420", 00:22:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:40.189 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:40.189 "hdgst": false, 00:22:40.189 "ddgst": false 00:22:40.189 }, 00:22:40.189 "method": "bdev_nvme_attach_controller" 00:22:40.189 },{ 00:22:40.189 "params": { 00:22:40.189 "name": "Nvme4", 00:22:40.189 "trtype": "tcp", 00:22:40.189 "traddr": "10.0.0.2", 00:22:40.189 "adrfam": "ipv4", 00:22:40.189 "trsvcid": "4420", 00:22:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:40.189 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:40.189 "hdgst": false, 00:22:40.189 "ddgst": false 00:22:40.189 }, 00:22:40.189 "method": "bdev_nvme_attach_controller" 00:22:40.189 },{ 00:22:40.189 "params": { 00:22:40.189 "name": "Nvme5", 00:22:40.189 "trtype": "tcp", 00:22:40.189 "traddr": "10.0.0.2", 00:22:40.189 "adrfam": "ipv4", 00:22:40.189 "trsvcid": "4420", 00:22:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:40.189 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:40.189 "hdgst": false, 00:22:40.189 "ddgst": false 00:22:40.189 }, 00:22:40.189 "method": "bdev_nvme_attach_controller" 00:22:40.189 },{ 00:22:40.189 "params": { 00:22:40.189 "name": "Nvme6", 00:22:40.189 "trtype": "tcp", 00:22:40.189 "traddr": "10.0.0.2", 00:22:40.189 "adrfam": "ipv4", 00:22:40.189 "trsvcid": "4420", 00:22:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:40.189 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:40.189 "hdgst": false, 00:22:40.189 "ddgst": false 00:22:40.189 }, 00:22:40.189 "method": "bdev_nvme_attach_controller" 00:22:40.189 },{ 00:22:40.189 "params": { 00:22:40.189 "name": "Nvme7", 00:22:40.189 "trtype": "tcp", 00:22:40.189 "traddr": "10.0.0.2", 00:22:40.189 "adrfam": "ipv4", 00:22:40.189 "trsvcid": "4420", 00:22:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:40.189 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:40.189 "hdgst": false, 00:22:40.189 "ddgst": false 00:22:40.189 }, 00:22:40.189 "method": "bdev_nvme_attach_controller" 00:22:40.189 },{ 00:22:40.189 "params": { 00:22:40.189 "name": "Nvme8", 00:22:40.189 "trtype": "tcp", 00:22:40.189 "traddr": "10.0.0.2", 00:22:40.189 "adrfam": "ipv4", 00:22:40.189 "trsvcid": "4420", 00:22:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:40.189 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:40.189 "hdgst": false, 00:22:40.189 "ddgst": false 00:22:40.189 }, 00:22:40.189 "method": "bdev_nvme_attach_controller" 00:22:40.189 },{ 00:22:40.189 "params": { 00:22:40.189 "name": "Nvme9", 00:22:40.189 "trtype": "tcp", 00:22:40.189 "traddr": "10.0.0.2", 00:22:40.189 "adrfam": "ipv4", 00:22:40.189 "trsvcid": "4420", 00:22:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:40.189 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:40.189 "hdgst": false, 00:22:40.189 "ddgst": false 00:22:40.189 }, 00:22:40.189 "method": "bdev_nvme_attach_controller" 00:22:40.189 },{ 00:22:40.189 "params": { 00:22:40.190 "name": "Nvme10", 00:22:40.190 "trtype": "tcp", 00:22:40.190 "traddr": "10.0.0.2", 00:22:40.190 "adrfam": "ipv4", 00:22:40.190 "trsvcid": "4420", 00:22:40.190 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:40.190 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:40.190 "hdgst": false, 00:22:40.190 "ddgst": false 00:22:40.190 }, 00:22:40.190 "method": "bdev_nvme_attach_controller" 00:22:40.190 }' 00:22:40.190 [2024-12-08 06:26:30.302335] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:22:40.190 [2024-12-08 06:26:30.302410] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:40.449 [2024-12-08 06:26:30.376144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.449 [2024-12-08 06:26:30.436588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.351 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.351 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:42.351 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:42.351 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.351 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.351 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.351 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1111230 00:22:42.351 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:42.351 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:43.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1111230 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1111049 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.288 { 00:22:43.288 "params": { 00:22:43.288 "name": "Nvme$subsystem", 00:22:43.288 "trtype": "$TEST_TRANSPORT", 00:22:43.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.288 "adrfam": "ipv4", 00:22:43.288 "trsvcid": "$NVMF_PORT", 00:22:43.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.288 "hdgst": ${hdgst:-false}, 00:22:43.288 "ddgst": ${ddgst:-false} 00:22:43.288 }, 00:22:43.288 "method": "bdev_nvme_attach_controller" 00:22:43.288 } 00:22:43.288 EOF 00:22:43.288 )") 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.288 { 00:22:43.288 "params": { 00:22:43.288 "name": "Nvme$subsystem", 00:22:43.288 "trtype": "$TEST_TRANSPORT", 00:22:43.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.288 "adrfam": "ipv4", 00:22:43.288 "trsvcid": "$NVMF_PORT", 00:22:43.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.288 "hdgst": ${hdgst:-false}, 00:22:43.288 "ddgst": ${ddgst:-false} 00:22:43.288 }, 00:22:43.288 "method": "bdev_nvme_attach_controller" 00:22:43.288 } 00:22:43.288 EOF 00:22:43.288 )") 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.288 { 00:22:43.288 "params": { 00:22:43.288 "name": "Nvme$subsystem", 00:22:43.288 "trtype": "$TEST_TRANSPORT", 00:22:43.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.288 "adrfam": "ipv4", 00:22:43.288 "trsvcid": "$NVMF_PORT", 00:22:43.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.288 "hdgst": ${hdgst:-false}, 00:22:43.288 "ddgst": ${ddgst:-false} 00:22:43.288 }, 00:22:43.288 "method": "bdev_nvme_attach_controller" 00:22:43.288 } 00:22:43.288 EOF 00:22:43.288 )") 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.288 { 00:22:43.288 "params": { 00:22:43.288 "name": "Nvme$subsystem", 00:22:43.288 "trtype": "$TEST_TRANSPORT", 00:22:43.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.288 "adrfam": "ipv4", 00:22:43.288 
"trsvcid": "$NVMF_PORT", 00:22:43.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.288 "hdgst": ${hdgst:-false}, 00:22:43.288 "ddgst": ${ddgst:-false} 00:22:43.288 }, 00:22:43.288 "method": "bdev_nvme_attach_controller" 00:22:43.288 } 00:22:43.288 EOF 00:22:43.288 )") 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.288 { 00:22:43.288 "params": { 00:22:43.288 "name": "Nvme$subsystem", 00:22:43.288 "trtype": "$TEST_TRANSPORT", 00:22:43.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.288 "adrfam": "ipv4", 00:22:43.288 "trsvcid": "$NVMF_PORT", 00:22:43.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.288 "hdgst": ${hdgst:-false}, 00:22:43.288 "ddgst": ${ddgst:-false} 00:22:43.288 }, 00:22:43.288 "method": "bdev_nvme_attach_controller" 00:22:43.288 } 00:22:43.288 EOF 00:22:43.288 )") 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.288 { 00:22:43.288 "params": { 00:22:43.288 "name": "Nvme$subsystem", 00:22:43.288 "trtype": "$TEST_TRANSPORT", 00:22:43.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.288 "adrfam": "ipv4", 00:22:43.288 "trsvcid": "$NVMF_PORT", 00:22:43.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.288 "hdgst": ${hdgst:-false}, 00:22:43.288 "ddgst": ${ddgst:-false} 00:22:43.288 }, 00:22:43.288 "method": "bdev_nvme_attach_controller" 00:22:43.288 } 00:22:43.288 EOF 00:22:43.288 )") 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.288 { 00:22:43.288 "params": { 00:22:43.288 "name": "Nvme$subsystem", 00:22:43.288 "trtype": "$TEST_TRANSPORT", 00:22:43.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.288 "adrfam": "ipv4", 00:22:43.288 "trsvcid": "$NVMF_PORT", 00:22:43.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.288 "hdgst": ${hdgst:-false}, 00:22:43.288 "ddgst": ${ddgst:-false} 00:22:43.288 }, 00:22:43.288 "method": "bdev_nvme_attach_controller" 00:22:43.288 } 00:22:43.288 EOF 00:22:43.288 )") 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.288 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.288 { 00:22:43.289 
"params": { 00:22:43.289 "name": "Nvme$subsystem", 00:22:43.289 "trtype": "$TEST_TRANSPORT", 00:22:43.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.289 "adrfam": "ipv4", 00:22:43.289 "trsvcid": "$NVMF_PORT", 00:22:43.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.289 "hdgst": ${hdgst:-false}, 00:22:43.289 "ddgst": ${ddgst:-false} 00:22:43.289 }, 00:22:43.289 "method": "bdev_nvme_attach_controller" 00:22:43.289 } 00:22:43.289 EOF 00:22:43.289 )") 00:22:43.289 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.289 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.289 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.289 { 00:22:43.289 "params": { 00:22:43.289 "name": "Nvme$subsystem", 00:22:43.289 "trtype": "$TEST_TRANSPORT", 00:22:43.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.289 "adrfam": "ipv4", 00:22:43.289 "trsvcid": "$NVMF_PORT", 00:22:43.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.289 "hdgst": ${hdgst:-false}, 00:22:43.289 "ddgst": ${ddgst:-false} 00:22:43.289 }, 00:22:43.289 "method": "bdev_nvme_attach_controller" 00:22:43.289 } 00:22:43.289 EOF 00:22:43.289 )") 00:22:43.289 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.289 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.289 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.289 { 00:22:43.289 "params": { 00:22:43.289 "name": "Nvme$subsystem", 00:22:43.289 "trtype": "$TEST_TRANSPORT", 00:22:43.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.289 "adrfam": "ipv4", 00:22:43.289 "trsvcid": "$NVMF_PORT", 00:22:43.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.289 "hdgst": ${hdgst:-false}, 00:22:43.289 "ddgst": ${ddgst:-false} 00:22:43.289 }, 00:22:43.289 "method": "bdev_nvme_attach_controller" 00:22:43.289 } 00:22:43.289 EOF 00:22:43.289 )") 00:22:43.289 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:43.289 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:43.289 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:43.289 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:43.289 "params": { 00:22:43.289 "name": "Nvme1", 00:22:43.289 "trtype": "tcp", 00:22:43.289 "traddr": "10.0.0.2", 00:22:43.289 "adrfam": "ipv4", 00:22:43.289 "trsvcid": "4420", 00:22:43.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.289 "hdgst": false, 00:22:43.289 "ddgst": false 00:22:43.289 }, 00:22:43.289 "method": "bdev_nvme_attach_controller" 00:22:43.289 },{ 00:22:43.289 "params": { 00:22:43.289 "name": "Nvme2", 00:22:43.289 "trtype": "tcp", 00:22:43.289 "traddr": "10.0.0.2", 00:22:43.289 "adrfam": "ipv4", 00:22:43.289 "trsvcid": "4420", 00:22:43.289 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:43.289 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:43.289 "hdgst": false, 00:22:43.289 "ddgst": false 00:22:43.289 }, 00:22:43.289 "method": "bdev_nvme_attach_controller" 00:22:43.289 },{ 00:22:43.289 "params": { 00:22:43.289 "name": "Nvme3", 00:22:43.289 "trtype": "tcp", 00:22:43.289 "traddr": "10.0.0.2", 00:22:43.289 "adrfam": "ipv4", 00:22:43.289 "trsvcid": "4420", 00:22:43.289 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:43.289 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:43.289 "hdgst": false, 00:22:43.289 "ddgst": false 00:22:43.289 }, 00:22:43.289 "method": "bdev_nvme_attach_controller" 00:22:43.289 },{ 00:22:43.289 "params": { 00:22:43.289 "name": "Nvme4", 00:22:43.289 "trtype": "tcp", 00:22:43.289 "traddr": "10.0.0.2", 00:22:43.289 "adrfam": "ipv4", 00:22:43.289 "trsvcid": "4420", 00:22:43.289 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:43.289 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:43.289 "hdgst": false, 00:22:43.289 "ddgst": false 00:22:43.289 }, 00:22:43.289 "method": "bdev_nvme_attach_controller" 00:22:43.289 },{ 00:22:43.289 "params": { 00:22:43.289 "name": "Nvme5", 00:22:43.289 "trtype": "tcp", 00:22:43.289 "traddr": "10.0.0.2", 00:22:43.289 "adrfam": "ipv4", 00:22:43.289 "trsvcid": "4420", 00:22:43.289 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:43.289 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:43.289 "hdgst": false, 00:22:43.289 "ddgst": false 00:22:43.289 }, 00:22:43.289 "method": "bdev_nvme_attach_controller" 00:22:43.289 },{ 00:22:43.289 "params": { 00:22:43.289 "name": "Nvme6", 00:22:43.289 "trtype": "tcp", 00:22:43.289 "traddr": "10.0.0.2", 00:22:43.289 "adrfam": "ipv4", 00:22:43.289 "trsvcid": "4420", 00:22:43.289 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:43.289 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:43.289 "hdgst": false, 00:22:43.289 "ddgst": false 00:22:43.289 }, 00:22:43.289 "method": "bdev_nvme_attach_controller" 00:22:43.289 },{ 00:22:43.289 "params": { 00:22:43.289 "name": "Nvme7", 00:22:43.289 "trtype": "tcp", 00:22:43.289 "traddr": "10.0.0.2", 00:22:43.289 "adrfam": "ipv4", 00:22:43.289 "trsvcid": "4420", 00:22:43.289 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:43.289 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:43.289 "hdgst": false, 00:22:43.289 "ddgst": false 00:22:43.289 }, 00:22:43.289 "method": "bdev_nvme_attach_controller" 00:22:43.289 },{ 00:22:43.289 "params": { 00:22:43.289 "name": "Nvme8", 00:22:43.289 "trtype": "tcp", 00:22:43.289 "traddr": "10.0.0.2", 00:22:43.289 "adrfam": "ipv4", 00:22:43.289 "trsvcid": "4420", 00:22:43.289 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:43.289 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:43.289 "hdgst": false,
00:22:43.289 "ddgst": false
00:22:43.289 },
00:22:43.289 "method": "bdev_nvme_attach_controller"
00:22:43.289 },{
00:22:43.289 "params": {
00:22:43.289 "name": "Nvme9",
00:22:43.289 "trtype": "tcp",
00:22:43.289 "traddr": "10.0.0.2",
00:22:43.289 "adrfam": "ipv4",
00:22:43.289 "trsvcid": "4420",
00:22:43.289 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:22:43.289 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:22:43.289 "hdgst": false,
00:22:43.289 "ddgst": false
00:22:43.289 },
00:22:43.289 "method": "bdev_nvme_attach_controller"
00:22:43.289 },{
00:22:43.289 "params": {
00:22:43.289 "name": "Nvme10",
00:22:43.289 "trtype": "tcp",
00:22:43.289 "traddr": "10.0.0.2",
00:22:43.289 "adrfam": "ipv4",
00:22:43.289 "trsvcid": "4420",
00:22:43.289 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:22:43.289 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:22:43.289 "hdgst": false,
00:22:43.289 "ddgst": false
00:22:43.289 },
00:22:43.289 "method": "bdev_nvme_attach_controller"
00:22:43.289 }'
00:22:43.289 [2024-12-08 06:26:33.374776] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:22:43.289 [2024-12-08 06:26:33.374862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1111643 ]
00:22:43.548 [2024-12-08 06:26:33.449796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:43.548 [2024-12-08 06:26:33.509582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:45.452 Running I/O for 1 seconds...
00:22:46.391 1673.00 IOPS, 104.56 MiB/s
00:22:46.391 Latency(us)
00:22:46.391 [2024-12-08T05:26:36.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:46.391 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:46.391 Verification LBA range: start 0x0 length 0x400
00:22:46.391 Nvme1n1 : 1.14 224.73 14.05 0.00 0.00 280990.15 19709.35 256318.58
00:22:46.391 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:46.391 Verification LBA range: start 0x0 length 0x400
00:22:46.391 Nvme2n1 : 1.14 228.05 14.25 0.00 0.00 270587.87 11311.03 223696.21
00:22:46.391 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:46.391 Verification LBA range: start 0x0 length 0x400
00:22:46.391 Nvme3n1 : 1.13 227.34 14.21 0.00 0.00 269038.55 23398.78 260978.92
00:22:46.391 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:46.391 Verification LBA range: start 0x0 length 0x400
00:22:46.391 Nvme4n1 : 1.13 225.62 14.10 0.00 0.00 266405.93 24660.95 273406.48
00:22:46.391 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:46.391 Verification LBA range: start 0x0 length 0x400
00:22:46.391 Nvme5n1 : 1.12 172.13 10.76 0.00 0.00 342455.69 23884.23 287387.50
00:22:46.391 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:46.392 Verification LBA range: start 0x0 length 0x400
00:22:46.392 Nvme6n1 : 1.16 220.84 13.80 0.00 0.00 262947.27 20000.62 295154.73
00:22:46.392 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:46.392 Verification LBA range: start 0x0 length 0x400
00:22:46.392 Nvme7n1 : 1.15 223.35 13.96 0.00 0.00 255242.81 21651.15 260978.92
00:22:46.392 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:46.392 Verification LBA range: start 0x0 length 0x400
00:22:46.392 Nvme8n1 : 1.15 225.85 14.12 0.00 0.00 247557.15 2305.90 278066.82
00:22:46.392 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:46.392 Verification LBA range: start 0x0 length 0x400
00:22:46.392 Nvme9n1 : 1.17 223.95 14.00 0.00 0.00 245422.28 4854.52 292047.83
00:22:46.392 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:46.392 Verification LBA range: start 0x0 length 0x400
00:22:46.392 Nvme10n1 : 1.21 265.10 16.57 0.00 0.00 205343.63 6602.15 295154.73
00:22:46.392 [2024-12-08T05:26:36.511Z] ===================================================================================================================
00:22:46.392 [2024-12-08T05:26:36.511Z] Total : 2236.95 139.81 0.00 0.00 261134.20 2305.90 295154.73
00:22:46.649 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:22:46.650 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:46.650 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:46.650 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:46.650 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:46.650 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:46.650 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:22:46.650 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:46.650 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:22:46.650 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:46.650 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:46.908 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1111049 ']'
00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1111049
00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1111049 ']'
00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1111049
00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname
00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 --
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1111049 00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1111049' 00:22:46.908 killing process with pid 1111049 00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1111049 00:22:46.908 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1111049 00:22:47.474 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.474 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.474 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.474 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:47.474 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:47.474 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.474 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.474 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.474 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.474 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.474 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.474 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.425 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.425 00:22:49.425 real 0m12.425s 00:22:49.425 user 0m36.429s 00:22:49.425 sys 0m3.416s 00:22:49.425 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.425 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:49.425 ************************************ 00:22:49.425 END TEST nvmf_shutdown_tc1 00:22:49.425 ************************************ 00:22:49.425 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:49.425 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:49.425 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
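The block above is the tc1 teardown: stoptarget removes the per-test state files, nvmftestfini unloads the NVMe/TCP kernel modules (the rmmod lines are the verbose modprobe -v -r output), killprocess stops the nvmf_tgt daemon, and nvmf_tcp_fini strips the SPDK_NVMF-tagged iptables rules and the test namespace. A condensed standalone sketch of that cleanup follows; the namespace and interface names are taken from this log, and ip netns delete is an assumed stand-in for the suite's remove_spdk_ns helper, whose body is not expanded in the trace.

#!/usr/bin/env bash
# Sketch of the nvmftestfini/nvmf_tcp_fini cleanup traced above (run as root,
# after the nvmf_tgt process has been killed). Names come from this log;
# 'ip netns delete' is an assumed equivalent of remove_spdk_ns.
modprobe -v -r nvme-tcp                                # prints the rmmod lines seen above
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # remove the target-side namespace
ip -4 addr flush cvl_0_1                               # clear the initiator-side interface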
00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:49.426 ************************************ 00:22:49.426 START TEST nvmf_shutdown_tc2 00:22:49.426 ************************************ 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:49.426 06:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:49.426 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:49.426 06:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:49.426 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:49.426 Found net devices under 0000:84:00.0: cvl_0_0 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.426 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:49.427 Found net devices under 0000:84:00.1: cvl_0_1 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.427 06:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:49.427 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:49.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:49.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms
00:22:49.686
00:22:49.686 --- 10.0.0.2 ping statistics ---
00:22:49.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:49.686 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:49.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:49.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms
00:22:49.686
00:22:49.686 --- 10.0.0.1 ping statistics ---
00:22:49.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:49.686 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1112422
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1112422
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1112422 ']'
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
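Condensing the nvmftestinit trace above: one port of the NIC pair (cvl_0_0) is moved into a private namespace to act as the target side, both ends are addressed out of 10.0.0.0/24, the NVMe/TCP port is opened with a tagged iptables rule, and reachability is checked in both directions before nvmf_tgt is started inside the namespace with ip netns exec. A standalone sketch of those steps, using the names and addresses from this log:

#!/usr/bin/env bash
# Sketch of the network plumbing traced above (run as root); every name and
# address below is taken from this log.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listen port; the comment lets teardown strip the rule again.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                           # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1       # target namespace -> root namespace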
00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.686 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.686 [2024-12-08 06:26:39.659973] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:22:49.686 [2024-12-08 06:26:39.660051] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.686 [2024-12-08 06:26:39.729922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.686 [2024-12-08 06:26:39.791588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.686 [2024-12-08 06:26:39.791641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.686 [2024-12-08 06:26:39.791656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.686 [2024-12-08 06:26:39.791667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.686 [2024-12-08 06:26:39.791677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.686 [2024-12-08 06:26:39.793368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.686 [2024-12-08 06:26:39.793432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.686 [2024-12-08 06:26:39.793509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:49.686 [2024-12-08 06:26:39.793513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.946 [2024-12-08 06:26:39.948436] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:49.946 06:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.946 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.947 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.947 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.947 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.947 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.947 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.947 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.947 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.947 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:49.947 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.947 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.947 Malloc1 
00:22:49.947 [2024-12-08 06:26:40.054266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.206 Malloc2 00:22:50.206 Malloc3 00:22:50.206 Malloc4 00:22:50.206 Malloc5 00:22:50.206 Malloc6 00:22:50.206 Malloc7 00:22:50.465 Malloc8 00:22:50.465 Malloc9 00:22:50.465 Malloc10 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1112598 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1112598 /var/tmp/bdevperf.sock 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1112598 ']' 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.465 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.465 { 00:22:50.465 "params": { 00:22:50.465 "name": "Nvme$subsystem", 00:22:50.465 "trtype": "$TEST_TRANSPORT", 00:22:50.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.465 "adrfam": "ipv4", 00:22:50.465 "trsvcid": "$NVMF_PORT", 00:22:50.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.465 "hdgst": ${hdgst:-false}, 00:22:50.465 "ddgst": ${ddgst:-false} 00:22:50.465 }, 00:22:50.465 "method": "bdev_nvme_attach_controller" 00:22:50.465 } 00:22:50.465 EOF 00:22:50.465 )") 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.466 { 00:22:50.466 "params": { 00:22:50.466 "name": "Nvme$subsystem", 00:22:50.466 "trtype": "$TEST_TRANSPORT", 00:22:50.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.466 "adrfam": "ipv4", 00:22:50.466 "trsvcid": "$NVMF_PORT", 00:22:50.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.466 "hdgst": ${hdgst:-false}, 00:22:50.466 "ddgst": ${ddgst:-false} 00:22:50.466 }, 00:22:50.466 "method": "bdev_nvme_attach_controller" 00:22:50.466 } 00:22:50.466 EOF 00:22:50.466 )") 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.466 { 00:22:50.466 "params": { 00:22:50.466 "name": "Nvme$subsystem", 00:22:50.466 "trtype": "$TEST_TRANSPORT", 00:22:50.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.466 "adrfam": "ipv4", 00:22:50.466 "trsvcid": "$NVMF_PORT", 00:22:50.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.466 "hdgst": ${hdgst:-false}, 00:22:50.466 "ddgst": ${ddgst:-false} 00:22:50.466 }, 00:22:50.466 "method": "bdev_nvme_attach_controller" 00:22:50.466 } 00:22:50.466 EOF 00:22:50.466 )") 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.466 { 00:22:50.466 "params": { 00:22:50.466 "name": "Nvme$subsystem", 00:22:50.466 
"trtype": "$TEST_TRANSPORT", 00:22:50.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.466 "adrfam": "ipv4", 00:22:50.466 "trsvcid": "$NVMF_PORT", 00:22:50.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.466 "hdgst": ${hdgst:-false}, 00:22:50.466 "ddgst": ${ddgst:-false} 00:22:50.466 }, 00:22:50.466 "method": "bdev_nvme_attach_controller" 00:22:50.466 } 00:22:50.466 EOF 00:22:50.466 )") 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.466 { 00:22:50.466 "params": { 00:22:50.466 "name": "Nvme$subsystem", 00:22:50.466 "trtype": "$TEST_TRANSPORT", 00:22:50.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.466 "adrfam": "ipv4", 00:22:50.466 "trsvcid": "$NVMF_PORT", 00:22:50.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.466 "hdgst": ${hdgst:-false}, 00:22:50.466 "ddgst": ${ddgst:-false} 00:22:50.466 }, 00:22:50.466 "method": "bdev_nvme_attach_controller" 00:22:50.466 } 00:22:50.466 EOF 00:22:50.466 )") 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.466 { 00:22:50.466 "params": { 00:22:50.466 "name": "Nvme$subsystem", 00:22:50.466 "trtype": "$TEST_TRANSPORT", 00:22:50.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.466 "adrfam": "ipv4", 00:22:50.466 "trsvcid": "$NVMF_PORT", 00:22:50.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.466 "hdgst": ${hdgst:-false}, 00:22:50.466 "ddgst": ${ddgst:-false} 00:22:50.466 }, 00:22:50.466 "method": "bdev_nvme_attach_controller" 00:22:50.466 } 00:22:50.466 EOF 00:22:50.466 )") 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.466 { 00:22:50.466 "params": { 00:22:50.466 "name": "Nvme$subsystem", 00:22:50.466 "trtype": "$TEST_TRANSPORT", 00:22:50.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.466 "adrfam": "ipv4", 00:22:50.466 "trsvcid": "$NVMF_PORT", 00:22:50.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.466 "hdgst": ${hdgst:-false}, 00:22:50.466 "ddgst": ${ddgst:-false} 00:22:50.466 }, 00:22:50.466 "method": "bdev_nvme_attach_controller" 00:22:50.466 } 00:22:50.466 EOF 00:22:50.466 )") 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.466 06:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.466 { 00:22:50.466 "params": { 00:22:50.466 "name": "Nvme$subsystem", 00:22:50.466 "trtype": "$TEST_TRANSPORT", 00:22:50.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.466 "adrfam": "ipv4", 00:22:50.466 "trsvcid": "$NVMF_PORT", 00:22:50.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.466 "hdgst": ${hdgst:-false}, 00:22:50.466 "ddgst": ${ddgst:-false} 00:22:50.466 }, 00:22:50.466 "method": "bdev_nvme_attach_controller" 00:22:50.466 } 00:22:50.466 EOF 00:22:50.466 )") 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.466 { 00:22:50.466 "params": { 00:22:50.466 "name": "Nvme$subsystem", 00:22:50.466 "trtype": "$TEST_TRANSPORT", 00:22:50.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.466 "adrfam": "ipv4", 00:22:50.466 "trsvcid": "$NVMF_PORT", 00:22:50.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.466 "hdgst": ${hdgst:-false}, 00:22:50.466 "ddgst": ${ddgst:-false} 00:22:50.466 }, 00:22:50.466 "method": "bdev_nvme_attach_controller" 00:22:50.466 } 00:22:50.466 EOF 00:22:50.466 )") 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.466 { 00:22:50.466 "params": { 00:22:50.466 "name": "Nvme$subsystem", 00:22:50.466 "trtype": "$TEST_TRANSPORT", 00:22:50.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.466 "adrfam": "ipv4", 00:22:50.466 "trsvcid": "$NVMF_PORT", 00:22:50.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.466 "hdgst": ${hdgst:-false}, 00:22:50.466 "ddgst": ${ddgst:-false} 00:22:50.466 }, 00:22:50.466 "method": "bdev_nvme_attach_controller" 00:22:50.466 } 00:22:50.466 EOF 00:22:50.466 )") 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
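The nvmf/common.sh@560-@584 traces above repeat one idiom per subsystem: a heredoc stanza with ${hdgst:-false}-style defaults is captured into the config array, IFS=, makes "${config[*]}" join the elements with commas, and jq . validates and pretty-prints the assembled document (the printf '%s\n' output that follows). A minimal, hypothetical reduction of that build-and-validate pattern, with the stanza trimmed for brevity; the real gen_nvmf_target_json fills in the full attach-controller parameters:

gen_json() {  # hypothetical reduction of gen_nvmf_target_json
    local subsystem config=()
    for subsystem in "$@"; do
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "hdgst": ${hdgst:-false} },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    local IFS=,          # "${config[*]}" now joins its elements with commas
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}
gen_json 1 2 3           # prints a jq-validated config with three stanzas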
00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:50.466 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:50.466 "params": { 00:22:50.466 "name": "Nvme1", 00:22:50.466 "trtype": "tcp", 00:22:50.466 "traddr": "10.0.0.2", 00:22:50.466 "adrfam": "ipv4", 00:22:50.466 "trsvcid": "4420", 00:22:50.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.466 "hdgst": false, 00:22:50.466 "ddgst": false 00:22:50.466 }, 00:22:50.466 "method": "bdev_nvme_attach_controller" 00:22:50.466 },{ 00:22:50.466 "params": { 00:22:50.466 "name": "Nvme2", 00:22:50.466 "trtype": "tcp", 00:22:50.466 "traddr": "10.0.0.2", 00:22:50.466 "adrfam": "ipv4", 00:22:50.466 "trsvcid": "4420", 00:22:50.466 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:50.466 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:50.466 "hdgst": false, 00:22:50.466 "ddgst": false 00:22:50.466 }, 00:22:50.466 "method": "bdev_nvme_attach_controller" 00:22:50.466 },{ 00:22:50.466 "params": { 00:22:50.466 "name": "Nvme3", 00:22:50.466 "trtype": "tcp", 00:22:50.466 "traddr": "10.0.0.2", 00:22:50.466 "adrfam": "ipv4", 00:22:50.466 "trsvcid": "4420", 00:22:50.466 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:50.466 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:50.466 "hdgst": false, 00:22:50.466 "ddgst": false 00:22:50.466 }, 00:22:50.467 "method": "bdev_nvme_attach_controller" 00:22:50.467 },{ 00:22:50.467 "params": { 00:22:50.467 "name": "Nvme4", 00:22:50.467 "trtype": "tcp", 00:22:50.467 "traddr": "10.0.0.2", 00:22:50.467 "adrfam": "ipv4", 00:22:50.467 "trsvcid": "4420", 00:22:50.467 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:50.467 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:50.467 "hdgst": false, 00:22:50.467 "ddgst": false 00:22:50.467 }, 00:22:50.467 "method": "bdev_nvme_attach_controller" 00:22:50.467 },{ 00:22:50.467 "params": { 00:22:50.467 "name": "Nvme5", 00:22:50.467 "trtype": "tcp", 00:22:50.467 "traddr": "10.0.0.2", 00:22:50.467 "adrfam": "ipv4", 00:22:50.467 "trsvcid": "4420", 00:22:50.467 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:50.467 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:50.467 "hdgst": false, 00:22:50.467 "ddgst": false 00:22:50.467 }, 00:22:50.467 "method": "bdev_nvme_attach_controller" 00:22:50.467 },{ 00:22:50.467 "params": { 00:22:50.467 "name": "Nvme6", 00:22:50.467 "trtype": "tcp", 00:22:50.467 "traddr": "10.0.0.2", 00:22:50.467 "adrfam": "ipv4", 00:22:50.467 "trsvcid": "4420", 00:22:50.467 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:50.467 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:50.467 "hdgst": false, 00:22:50.467 "ddgst": false 00:22:50.467 }, 00:22:50.467 "method": "bdev_nvme_attach_controller" 00:22:50.467 },{ 00:22:50.467 "params": { 00:22:50.467 "name": "Nvme7", 00:22:50.467 "trtype": "tcp", 00:22:50.467 "traddr": "10.0.0.2", 00:22:50.467 "adrfam": "ipv4", 00:22:50.467 "trsvcid": "4420", 00:22:50.467 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:50.467 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:50.467 "hdgst": false, 00:22:50.467 "ddgst": false 00:22:50.467 }, 00:22:50.467 "method": "bdev_nvme_attach_controller" 00:22:50.467 },{ 00:22:50.467 "params": { 00:22:50.467 "name": "Nvme8", 00:22:50.467 "trtype": "tcp", 00:22:50.467 "traddr": "10.0.0.2", 00:22:50.467 "adrfam": "ipv4", 00:22:50.467 "trsvcid": "4420", 00:22:50.467 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:50.467 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:50.467 "hdgst": false, 00:22:50.467 "ddgst": false 00:22:50.467 }, 00:22:50.467 "method": "bdev_nvme_attach_controller" 00:22:50.467 },{ 00:22:50.467 "params": { 00:22:50.467 "name": "Nvme9", 00:22:50.467 "trtype": "tcp", 00:22:50.467 "traddr": "10.0.0.2", 00:22:50.467 "adrfam": "ipv4", 00:22:50.467 "trsvcid": "4420", 00:22:50.467 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:50.467 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:50.467 "hdgst": false, 00:22:50.467 "ddgst": false 00:22:50.467 }, 00:22:50.467 "method": "bdev_nvme_attach_controller" 00:22:50.467 },{ 00:22:50.467 "params": { 00:22:50.467 "name": "Nvme10", 00:22:50.467 "trtype": "tcp", 00:22:50.467 "traddr": "10.0.0.2", 00:22:50.467 "adrfam": "ipv4", 00:22:50.467 "trsvcid": "4420", 00:22:50.467 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:50.467 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:50.467 "hdgst": false, 00:22:50.467 "ddgst": false 00:22:50.467 }, 00:22:50.467 "method": "bdev_nvme_attach_controller" 00:22:50.467 }' 00:22:50.467 [2024-12-08 06:26:40.573644] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:22:50.467 [2024-12-08 06:26:40.573759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1112598 ] 00:22:50.725 [2024-12-08 06:26:40.649167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.725 [2024-12-08 06:26:40.709828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.626 Running I/O for 10 seconds... 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:52.626 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1112598 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1112598 ']' 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1112598 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.906 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1112598 00:22:52.906 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:52.906 06:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:52.906 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1112598'
killing process with pid 1112598
00:22:53.165 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1112598
00:22:53.165 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1112598
00:22:53.165 Received shutdown signal, test time was about 0.834266 seconds
00:22:53.165
00:22:53.165 Latency(us)
00:22:53.165 [2024-12-08T05:26:43.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:53.165 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.165 Verification LBA range: start 0x0 length 0x400
00:22:53.165 Nvme1n1 : 0.82 234.11 14.63 0.00 0.00 269168.39 22427.88 262532.36
00:22:53.165 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.165 Verification LBA range: start 0x0 length 0x400
00:22:53.165 Nvme2n1 : 0.81 236.89 14.81 0.00 0.00 259666.43 20971.52 264085.81
00:22:53.165 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.165 Verification LBA range: start 0x0 length 0x400
00:22:53.165 Nvme3n1 : 0.80 239.00 14.94 0.00 0.00 250666.86 19806.44 250104.79
00:22:53.165 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.165 Verification LBA range: start 0x0 length 0x400
00:22:53.165 Nvme4n1 : 0.80 239.90 14.99 0.00 0.00 243641.71 33787.45 248551.35
00:22:53.165 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.165 Verification LBA range: start 0x0 length 0x400
00:22:53.165 Nvme5n1 : 0.82 233.28 14.58 0.00 0.00 245036.88 25826.04 265639.25
00:22:53.165 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.165 Verification LBA range: start 0x0 length 0x400
00:22:53.165 Nvme6n1 : 0.83 230.38 14.40 0.00 0.00 242155.01 20971.52 268746.15
00:22:53.165 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.165 Verification LBA range: start 0x0 length 0x400
00:22:53.165 Nvme7n1 : 0.83 231.37 14.46 0.00 0.00 234737.97 20583.16 267192.70
00:22:53.165 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.165 Verification LBA range: start 0x0 length 0x400
00:22:53.165 Nvme8n1 : 0.82 234.82 14.68 0.00 0.00 223514.17 17670.45 240784.12
00:22:53.165 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.165 Verification LBA range: start 0x0 length 0x400
00:22:53.165 Nvme9n1 : 0.78 164.12 10.26 0.00 0.00 309094.78 33787.45 284280.60
00:22:53.165 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.165 Verification LBA range: start 0x0 length 0x400
00:22:53.165 Nvme10n1 : 0.79 162.50 10.16 0.00 0.00 303894.76 21262.79 295154.73
00:22:53.165 [2024-12-08T05:26:43.284Z] ===================================================================================================================
00:22:53.165 [2024-12-08T05:26:43.284Z] Total : 2206.36 137.90 0.00 0.00 254705.05 17670.45 295154.73
00:22:53.424 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:22:54.358 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 --
target/shutdown.sh@115 -- # kill -0 1112422 00:22:54.358 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:54.358 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:54.358 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.359 rmmod nvme_tcp 00:22:54.359 rmmod nvme_fabrics 00:22:54.359 rmmod nvme_keyring 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1112422 ']' 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1112422 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1112422 ']' 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1112422 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1112422 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1112422' 00:22:54.359 killing process with pid 1112422 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@973 -- # kill 1112422 00:22:54.359 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1112422 00:22:54.928 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:54.928 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:54.928 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:54.928 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:54.928 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:54.928 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:54.928 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:54.928 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.928 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:54.928 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.928 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.928 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:57.466 00:22:57.466 real 0m7.593s 00:22:57.466 user 0m23.082s 00:22:57.466 sys 0m1.484s 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.466 ************************************ 00:22:57.466 END TEST nvmf_shutdown_tc2 00:22:57.466 ************************************ 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:57.466 ************************************ 00:22:57.466 START TEST nvmf_shutdown_tc3 00:22:57.466 ************************************ 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:57.466 06:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:57.466 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.466 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.467 06:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:57.467 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:57.467 Found net devices under 0000:84:00.0: cvl_0_0 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.467 06:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:57.467 Found net devices under 0000:84:00.1: cvl_0_1 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:57.467 06:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:22:57.467 00:22:57.467 --- 10.0.0.2 ping statistics --- 00:22:57.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.467 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:57.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:22:57.467 00:22:57.467 --- 10.0.0.1 ping statistics --- 00:22:57.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.467 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1113510 00:22:57.467 06:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1113510 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1113510 ']' 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.467 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.467 [2024-12-08 06:26:47.299556] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:22:57.467 [2024-12-08 06:26:47.299646] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.467 [2024-12-08 06:26:47.370398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.467 [2024-12-08 06:26:47.426192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.467 [2024-12-08 06:26:47.426254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.467 [2024-12-08 06:26:47.426282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.467 [2024-12-08 06:26:47.426293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.468 [2024-12-08 06:26:47.426303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
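[Annotation] The target above is started as nvmf_tgt -i 0 -e 0xFFFF -m 0x1E inside the cvl_0_0_ns_spdk namespace. The core mask 0x1E is binary 11110, i.e. cores 1-4, which matches "Total cores available: 4" and the four reactor threads reported on cores 1, 2, 3 and 4 just below; -e 0xFFFF enables all tracepoint groups. A minimal sketch of the bring-up sequence, condensed from the xtrace on this rig (interface names, addresses and paths are the ones this machine happens to use; the real nvmf/common.sh does more, including the iptables comment tagging):

  # Sketch of what nvmftestinit did above; condensed, not the verbatim helper.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  # 0x1E = 0b11110: reactors on cores 1-4
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &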
00:22:57.468 [2024-12-08 06:26:47.427904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.468 [2024-12-08 06:26:47.427970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.468 [2024-12-08 06:26:47.428032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:57.468 [2024-12-08 06:26:47.428035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.468 [2024-12-08 06:26:47.565691] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.468 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.727 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.727 Malloc1 00:22:57.727 [2024-12-08 06:26:47.655468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.727 Malloc2 00:22:57.727 Malloc3 00:22:57.727 Malloc4 00:22:57.727 Malloc5 00:22:57.986 Malloc6 00:22:57.986 Malloc7 00:22:57.986 Malloc8 00:22:57.986 Malloc9 00:22:57.986 Malloc10 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1113684 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1113684 /var/tmp/bdevperf.sock 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1113684 ']' 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.246 06:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.246 { 00:22:58.246 "params": { 00:22:58.246 "name": "Nvme$subsystem", 00:22:58.246 "trtype": "$TEST_TRANSPORT", 00:22:58.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.246 "adrfam": "ipv4", 00:22:58.246 "trsvcid": "$NVMF_PORT", 00:22:58.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.246 "hdgst": ${hdgst:-false}, 00:22:58.246 "ddgst": ${ddgst:-false} 00:22:58.246 }, 00:22:58.246 "method": "bdev_nvme_attach_controller" 00:22:58.246 } 00:22:58.246 EOF 00:22:58.246 )") 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.246 { 00:22:58.246 "params": { 00:22:58.246 "name": "Nvme$subsystem", 00:22:58.246 "trtype": "$TEST_TRANSPORT", 00:22:58.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.246 "adrfam": "ipv4", 00:22:58.246 "trsvcid": "$NVMF_PORT", 00:22:58.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.246 "hdgst": ${hdgst:-false}, 00:22:58.246 "ddgst": ${ddgst:-false} 00:22:58.246 }, 00:22:58.246 "method": "bdev_nvme_attach_controller" 00:22:58.246 } 00:22:58.246 EOF 00:22:58.246 )") 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.246 { 00:22:58.246 "params": { 00:22:58.246 
"name": "Nvme$subsystem", 00:22:58.246 "trtype": "$TEST_TRANSPORT", 00:22:58.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.246 "adrfam": "ipv4", 00:22:58.246 "trsvcid": "$NVMF_PORT", 00:22:58.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.246 "hdgst": ${hdgst:-false}, 00:22:58.246 "ddgst": ${ddgst:-false} 00:22:58.246 }, 00:22:58.246 "method": "bdev_nvme_attach_controller" 00:22:58.246 } 00:22:58.246 EOF 00:22:58.246 )") 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.246 { 00:22:58.246 "params": { 00:22:58.246 "name": "Nvme$subsystem", 00:22:58.246 "trtype": "$TEST_TRANSPORT", 00:22:58.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.246 "adrfam": "ipv4", 00:22:58.246 "trsvcid": "$NVMF_PORT", 00:22:58.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.246 "hdgst": ${hdgst:-false}, 00:22:58.246 "ddgst": ${ddgst:-false} 00:22:58.246 }, 00:22:58.246 "method": "bdev_nvme_attach_controller" 00:22:58.246 } 00:22:58.246 EOF 00:22:58.246 )") 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.246 { 00:22:58.246 "params": { 00:22:58.246 "name": "Nvme$subsystem", 00:22:58.246 "trtype": "$TEST_TRANSPORT", 00:22:58.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.246 "adrfam": "ipv4", 00:22:58.246 "trsvcid": "$NVMF_PORT", 00:22:58.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.246 "hdgst": ${hdgst:-false}, 00:22:58.246 "ddgst": ${ddgst:-false} 00:22:58.246 }, 00:22:58.246 "method": "bdev_nvme_attach_controller" 00:22:58.246 } 00:22:58.246 EOF 00:22:58.246 )") 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.246 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.246 { 00:22:58.246 "params": { 00:22:58.246 "name": "Nvme$subsystem", 00:22:58.246 "trtype": "$TEST_TRANSPORT", 00:22:58.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.246 "adrfam": "ipv4", 00:22:58.246 "trsvcid": "$NVMF_PORT", 00:22:58.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.247 "hdgst": ${hdgst:-false}, 00:22:58.247 "ddgst": ${ddgst:-false} 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 } 00:22:58.247 EOF 00:22:58.247 )") 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.247 { 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme$subsystem", 00:22:58.247 "trtype": "$TEST_TRANSPORT", 00:22:58.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "$NVMF_PORT", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.247 "hdgst": ${hdgst:-false}, 00:22:58.247 "ddgst": ${ddgst:-false} 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 } 00:22:58.247 EOF 00:22:58.247 )") 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.247 { 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme$subsystem", 00:22:58.247 "trtype": "$TEST_TRANSPORT", 00:22:58.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "$NVMF_PORT", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.247 "hdgst": ${hdgst:-false}, 00:22:58.247 "ddgst": ${ddgst:-false} 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 } 00:22:58.247 EOF 00:22:58.247 )") 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.247 { 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme$subsystem", 00:22:58.247 "trtype": "$TEST_TRANSPORT", 00:22:58.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "$NVMF_PORT", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.247 "hdgst": ${hdgst:-false}, 00:22:58.247 "ddgst": ${ddgst:-false} 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 } 00:22:58.247 EOF 00:22:58.247 )") 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.247 { 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme$subsystem", 00:22:58.247 "trtype": "$TEST_TRANSPORT", 00:22:58.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "$NVMF_PORT", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.247 "hdgst": ${hdgst:-false}, 00:22:58.247 "ddgst": ${ddgst:-false} 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 } 00:22:58.247 EOF 00:22:58.247 )") 00:22:58.247 06:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:58.247 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme1", 00:22:58.247 "trtype": "tcp", 00:22:58.247 "traddr": "10.0.0.2", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "4420", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.247 "hdgst": false, 00:22:58.247 "ddgst": false 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 },{ 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme2", 00:22:58.247 "trtype": "tcp", 00:22:58.247 "traddr": "10.0.0.2", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "4420", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:58.247 "hdgst": false, 00:22:58.247 "ddgst": false 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 },{ 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme3", 00:22:58.247 "trtype": "tcp", 00:22:58.247 "traddr": "10.0.0.2", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "4420", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:58.247 "hdgst": false, 00:22:58.247 "ddgst": false 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 },{ 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme4", 00:22:58.247 "trtype": "tcp", 00:22:58.247 "traddr": "10.0.0.2", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "4420", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:58.247 "hdgst": false, 00:22:58.247 "ddgst": false 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 },{ 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme5", 00:22:58.247 "trtype": "tcp", 00:22:58.247 "traddr": "10.0.0.2", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "4420", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:58.247 "hdgst": false, 00:22:58.247 "ddgst": false 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 },{ 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme6", 00:22:58.247 "trtype": "tcp", 00:22:58.247 "traddr": "10.0.0.2", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "4420", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:58.247 "hdgst": false, 00:22:58.247 "ddgst": false 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 },{ 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme7", 00:22:58.247 "trtype": "tcp", 00:22:58.247 "traddr": "10.0.0.2", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "4420", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:58.247 "hdgst": false, 00:22:58.247 "ddgst": false 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 },{ 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme8", 00:22:58.247 "trtype": "tcp", 
00:22:58.247 "traddr": "10.0.0.2", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "4420", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:58.247 "hdgst": false, 00:22:58.247 "ddgst": false 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 },{ 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme9", 00:22:58.247 "trtype": "tcp", 00:22:58.247 "traddr": "10.0.0.2", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "4420", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:58.247 "hdgst": false, 00:22:58.247 "ddgst": false 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 },{ 00:22:58.247 "params": { 00:22:58.247 "name": "Nvme10", 00:22:58.247 "trtype": "tcp", 00:22:58.247 "traddr": "10.0.0.2", 00:22:58.247 "adrfam": "ipv4", 00:22:58.247 "trsvcid": "4420", 00:22:58.247 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:58.247 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:58.247 "hdgst": false, 00:22:58.247 "ddgst": false 00:22:58.247 }, 00:22:58.247 "method": "bdev_nvme_attach_controller" 00:22:58.247 }' 00:22:58.248 [2024-12-08 06:26:48.185794] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:22:58.248 [2024-12-08 06:26:48.185870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1113684 ] 00:22:58.248 [2024-12-08 06:26:48.257692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.248 [2024-12-08 06:26:48.319949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.146 Running I/O for 10 seconds... 
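[Annotation] The JSON blob printed above is what gen_nvmf_target_json emits: one bdev_nvme_attach_controller entry per subsystem (Nvme1 through Nvme10, all at 10.0.0.2:4420), fed to bdevperf over /dev/fd/63. Once "Running I/O for 10 seconds..." appears, the test polls read completions through the bdevperf RPC socket. Below is a reconstruction of that waitforio loop as implied by the target/shutdown.sh xtrace in this log (lines @58-@70); rpc_cmd is the suite's RPC wrapper, and the real helper may differ in detail:

  # waitforio, reconstructed from the xtrace: poll up to 10 times,
  # 0.25 s apart, until the bdev has completed at least 100 reads.
  # In tc2 this is what produced the read_io_count=67 -> 131 progression.
  waitforio() {
      local rpc_sock=$1 bdev=$2
      local ret=1 i read_io_count
      for ((i = 10; i != 0; i--)); do
          read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
              jq -r '.bdevs[0].num_read_ops')
          if [ "$read_io_count" -ge 100 ]; then
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }
  # usage, as in the log: waitforio /var/tmp/bdevperf.sock Nvme1n1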
00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=85 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 85 -ge 100 ']' 00:23:00.405 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=152
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 152 -ge 100 ']'
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1113510
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1113510 ']'
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1113510
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1113510
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1113510'
killing process with pid 1113510
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1113510
00:23:00.680 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1113510
00:23:00.680 [2024-12-08 06:26:50.710383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da310 is same with the state(6) to be set
00:23:00.680 [previous message repeated many times for tqpair=0x11da310 through 06:26:50.711229]
00:23:00.681 [2024-12-08 06:26:50.713063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dc0f0 is same with the state(6) to be set
00:23:00.681 [previous message repeated many times for tqpair=0x11dc0f0 through 06:26:50.713935, interleaved with the host-side NVMe log lines below]
00:23:00.681 [2024-12-08 06:26:50.713240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.681 [2024-12-08 06:26:50.713288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.681 [2024-12-08 06:26:50.713309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.681 [2024-12-08 06:26:50.713324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.681 [2024-12-08 06:26:50.713338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.681 [2024-12-08 06:26:50.713352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.681 [2024-12-08 06:26:50.713367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.681 [2024-12-08 06:26:50.713380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.681 [2024-12-08 06:26:50.713394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9910 is same with the state(6) to be set
00:23:00.682 [2024-12-08 06:26:50.715061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.682 [2024-12-08 06:26:50.715087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.682 [2024-12-08 06:26:50.715114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.682 [2024-12-08 06:26:50.715131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.682 [command/completion pairs continue for WRITE cid:58-63 (lba 32000-32640), READ cid:4-5 (lba 25088-25216), WRITE cid:0-3 (lba 32768-33152), then READ cid:6-43 (lba 25344-30080); each command completes ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:00.683 [READ command/ABORTED - SQ DELETION pairs continue for cid:44-55 (lba 30208-31616)]
00:23:00.683 [2024-12-08 06:26:50.716800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dacb0 is same with the state(6) to be set
00:23:00.683 [previous message repeated many times for tqpair=0x11dacb0 through 06:26:50.717543, interleaved with the NVMe host log lines]
00:23:00.684 [2024-12-08 06:26:50.717277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.684 [2024-12-08 06:26:50.717299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.684 [WRITE command/ABORTED - SQ DELETION pairs continue for cid:3-7 (lba 24960-25472)]
00:23:00.684 [WRITE command/ABORTED - SQ DELETION pairs continue for cid:8-44 (lba 25600-30208), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:00.685 [2024-12-08 06:26:50.718605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.685 [2024-12-08 06:26:50.718619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-12-08 06:26:50.718635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-12-08 06:26:50.718648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-12-08 06:26:50.718663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-12-08 06:26:50.718677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-12-08 06:26:50.718692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-12-08 06:26:50.718711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-12-08 06:26:50.718733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-12-08 06:26:50.718748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-12-08 06:26:50.718778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-12-08 06:26:50.718792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-12-08 06:26:50.718808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-12-08 06:26:50.718822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-12-08 06:26:50.718837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-12-08 06:26:50.718850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-12-08 06:26:50.718865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-12-08 06:26:50.718878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-12-08 06:26:50.718893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-12-08 06:26:50.718906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-12-08 06:26:50.718921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-12-08 06:26:50.718934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-12-08 06:26:50.718949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-12-08 06:26:50.718946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.685 [2024-12-08 06:26:50.718962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-12-08 06:26:50.718976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.685 [2024-12-08 06:26:50.718978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.685 [2024-12-08 06:26:50.718994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.685 [2024-12-08 06:26:50.718993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.685 [2024-12-08 06:26:50.719009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.685 [2024-12-08 06:26:50.719011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-12-08 06:26:50.719022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-12-08 06:26:50.719035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-12-08 06:26:50.719048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-12-08 06:26:50.719061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-12-08 06:26:50.719076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-12-08 06:26:50.719091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set
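
Each WRITE printed by nvme_io_qpair_print_command in this burst is paired with a completion whose "(00/08)" status decodes, per the NVMe base specification, as Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion): the I/O still in flight is aborted because its submission queue is deleted during the controller reset. The LBAs also advance by exactly the printed length, 128 blocks, per command id. A minimal standalone C sketch of both observations (illustrative only, not SPDK code; the names are invented):

    #include <stdio.h>

    /* Decode the "(sct/sc)" pair printed next to each completion above. */
    static const char *status_string(unsigned sct, unsigned sc)
    {
        if (sct == 0x0 && sc == 0x08)   /* the only pair seen in this log */
            return "ABORTED - SQ DELETION";
        return "other";
    }

    int main(void)
    {
        /* cid:56 maps to lba:31744 above; every later cid adds len blocks. */
        const unsigned base_cid = 56, base_lba = 31744, len = 128;
        for (unsigned cid = base_cid; cid <= 60; cid++)
            printf("WRITE cid:%u expected lba:%u\n",
                   cid, base_lba + (cid - base_cid) * len);
        printf("(00/08) => %s\n", status_string(0x0, 0x08));
        return 0;
    }
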
00:23:00.686 [2024-12-08 06:26:50.719107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-12-08 06:26:50.719118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-12-08 06:26:50.719140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-12-08 06:26:50.719141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-12-08 06:26:50.719156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-12-08 06:26:50.719182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-12-08 06:26:50.719194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-12-08 06:26:50.719207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-12-08 06:26:50.719220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.686 [2024-12-08 06:26:50.719232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.686 [2024-12-08 06:26:50.719252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 
06:26:50.719552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.719818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db1a0 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.720688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.720717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same 
with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.720748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.720764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.720776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.720788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.720801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.720816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.686 [2024-12-08 06:26:50.720828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.720840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.720857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.720881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.720893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.720905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.720917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.720930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.720942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.720954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.720966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.720978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.720990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721309] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.721529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b060 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 
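
The tcp.c:1790 line that dominates this stretch is the NVMe-oF TCP target flagging a redundant state transition: nvmf_tcp_qpair_set_recv_state is asked to move a qpair into the receive state it already holds (state 6 here). A self-contained C mock of the guard that the message text implies, assuming a plain integer state field (an illustration, not the SPDK source):

    #include <stdio.h>

    struct mock_tqpair {
        int recv_state;                 /* current PDU receive state */
    };

    /* Log and bail out when the requested state equals the current one,
     * which is the condition the repeated message reports. */
    static void mock_set_recv_state(struct mock_tqpair *tqpair, int state)
    {
        if (tqpair->recv_state == state) {
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct mock_tqpair q = { .recv_state = 6 };
        mock_set_recv_state(&q, 6);     /* redundant request prints one error line */
        return 0;
    }
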
00:23:00.687 [2024-12-08 06:26:50.722800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.722992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.723015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.723027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.723039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.723051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.723062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is 
same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.723074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.723086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.723098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.723117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.687 [2024-12-08 06:26:50.723130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.723526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6b530 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.724057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:00.688 [2024-12-08 06:26:50.724099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:00.688 [2024-12-08 06:26:50.724159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d9480 (9): Bad file descriptor 00:23:00.688 [2024-12-08 06:26:50.724186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d9910 (9): Bad file descriptor 00:23:00.688 [2024-12-08 06:26:50.724264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724300] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d5ad0 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.724425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d6200 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.724594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2803820 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.724762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.688 [2024-12-08 06:26:50.724863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.688 [2024-12-08 06:26:50.724875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2834c10 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.724924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db520 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.724950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db520 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.724964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db520 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.724977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db520 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.724990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db520 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.725002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db520 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.725028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db520 is same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.725041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db520 is 
same with the state(6) to be set 00:23:00.688 [2024-12-08 06:26:50.725054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db520 is same with the state(6) to be set 00:23:00.689 [2024-12-08 06:26:50.725739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db520 is same with the state(6) to be set 00:23:00.689 [2024-12-08 06:26:50.726562] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.689 [2024-12-08 06:26:50.726704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.689 [2024-12-08 06:26:50.726742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d9910 with addr=10.0.0.2, port=4420 00:23:00.689 [2024-12-08 06:26:50.726760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9910 is same with the state(6) to be set 00:23:00.689 [2024-12-08 06:26:50.726807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db8a0 is same with the state(6) to be set 00:23:00.689
[2024-12-08 06:26:50.726881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.689 [2024-12-08 06:26:50.726907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d9480 with addr=10.0.0.2, port=4420 00:23:00.689 [2024-12-08 06:26:50.726922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9480 is same with the state(6) to be set 00:23:00.689 [2024-12-08 06:26:50.727013] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.690 [2024-12-08 06:26:50.727086] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.690 [2024-12-08 06:26:50.727117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db8a0 is same with the state(6) to be set 00:23:00.690 [2024-12-08 06:26:50.727378]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db8a0 is same with the state(6) to be set 00:23:00.690 [2024-12-08 06:26:50.727457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d9910 (9): Bad file descriptor 00:23:00.690 [2024-12-08 06:26:50.727485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d9480 (9): Bad file descriptor 00:23:00.690 [2024-12-08 06:26:50.727543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.727577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db8a0 is
same with the state(6) to be set 00:23:00.690 [2024-12-08 06:26:50.727587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.727619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.727654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.727684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.727713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.727754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.727789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.727818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.727847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.727875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:00.690 [2024-12-08 06:26:50.727903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.727932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.727961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.727974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.727990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.728003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.728019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.690 [2024-12-08 06:26:50.728039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.690 [2024-12-08 06:26:50.728055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 
[2024-12-08 06:26:50.728206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dbc20 is same with the state(6) to be set 00:23:00.691 [2024-12-08 06:26:50.728330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.691 [2024-12-08 06:26:50.728803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.691 [2024-12-08 06:26:50.728820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.728834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.728850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.728864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.728880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.728897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.728913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.728927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.728943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.728957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.728979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.728994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dbc20 is same with the state(6) to be set 00:23:00.692 [2024-12-08 06:26:50.729218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729509] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.692 [2024-12-08 06:26:50.729551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.692 [2024-12-08 06:26:50.729565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b0aa0 is same with the state(6) to be set 00:23:00.692 [2024-12-08 06:26:50.729758] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.692 [2024-12-08 06:26:50.730053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:00.692 [2024-12-08 06:26:50.730074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:00.692 [2024-12-08 06:26:50.730091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:00.693 [2024-12-08 06:26:50.730107] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:00.693 [2024-12-08 06:26:50.730122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:00.693 [2024-12-08 06:26:50.730135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:00.693 [2024-12-08 06:26:50.730148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:00.693 [2024-12-08 06:26:50.730160] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:23:00.693 [2024-12-08 06:26:50.731408] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.693 [2024-12-08 06:26:50.731639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:00.693 [2024-12-08 06:26:50.731673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d5ad0 (9): Bad file descriptor 00:23:00.693 [2024-12-08 06:26:50.731790] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.693 [2024-12-08 06:26:50.732519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.693 [2024-12-08 06:26:50.732548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d5ad0 with addr=10.0.0.2, port=4420 00:23:00.693 [2024-12-08 06:26:50.732564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d5ad0 is same with the state(6) to be set 00:23:00.693 [2024-12-08 06:26:50.732645] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.693 [2024-12-08 06:26:50.732719] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:00.693 [2024-12-08 06:26:50.732764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d5ad0 (9): Bad file descriptor 00:23:00.693 [2024-12-08 06:26:50.732854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:00.693 [2024-12-08 06:26:50.732875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:00.693 [2024-12-08 06:26:50.732889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:00.693 [2024-12-08 06:26:50.732902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:23:00.693 [2024-12-08 06:26:50.734149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.693 [2024-12-08 06:26:50.734180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.693 [2024-12-08 06:26:50.734195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.693 [2024-12-08 06:26:50.734208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.693 [2024-12-08 06:26:50.734229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.693 [2024-12-08 06:26:50.734243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.693 [2024-12-08 06:26:50.734256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.693 [2024-12-08 06:26:50.734269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.693 [2024-12-08 06:26:50.734282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341110 is same with the state(6) to be set
[... the same four-command ASYNC EVENT REQUEST abort sequence (qid:0 cid:0-3, ABORTED - SQ DELETION (00/08)) repeats for tqpair=0x284a360, tqpair=0x282c4a0, and tqpair=0x27fa730 ...]
00:23:00.693 [2024-12-08 06:26:50.734826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d6200 (9): Bad file descriptor
00:23:00.693 [2024-12-08 06:26:50.734862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2803820 (9): Bad file descriptor
00:23:00.693 [2024-12-08 06:26:50.734891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2834c10 (9): Bad file descriptor
00:23:00.693 [2024-12-08 06:26:50.735618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:00.693 [2024-12-08 06:26:50.735642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:00.693 [2024-12-08 06:26:50.735835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.693 [2024-12-08 06:26:50.735863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d9480 with addr=10.0.0.2, port=4420
00:23:00.693 [2024-12-08 06:26:50.735878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9480 is same with the state(6) to be set
00:23:00.693 [2024-12-08 06:26:50.736002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.693 [2024-12-08 06:26:50.736027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d9910 with addr=10.0.0.2, port=4420
00:23:00.693 [2024-12-08 06:26:50.736042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9910 is same with the state(6) to be set
00:23:00.693 [2024-12-08 06:26:50.736099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d9480 (9): Bad file descriptor
00:23:00.693 [2024-12-08 06:26:50.736122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d9910 (9): Bad file descriptor
00:23:00.693 [2024-12-08 06:26:50.736174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:00.693 [2024-12-08 06:26:50.736201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:00.693 [2024-12-08 06:26:50.736214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:00.693 [2024-12-08 06:26:50.736228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:00.693 [2024-12-08 06:26:50.736249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:00.693 [2024-12-08 06:26:50.736263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:00.693 [2024-12-08 06:26:50.736278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:00.694 [2024-12-08 06:26:50.736290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:00.694 [2024-12-08 06:26:50.742120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:00.694 [2024-12-08 06:26:50.742420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.694 [2024-12-08 06:26:50.742450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d5ad0 with addr=10.0.0.2, port=4420
00:23:00.694 [2024-12-08 06:26:50.742469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d5ad0 is same with the state(6) to be set
00:23:00.694 [2024-12-08 06:26:50.742552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d5ad0 (9): Bad file descriptor
00:23:00.694 [2024-12-08 06:26:50.742616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:00.694 [2024-12-08 06:26:50.742634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:00.694 [2024-12-08 06:26:50.742650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:00.694 [2024-12-08 06:26:50.742664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:00.694 [2024-12-08 06:26:50.744184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2341110 (9): Bad file descriptor
00:23:00.694 [2024-12-08 06:26:50.744227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x284a360 (9): Bad file descriptor
00:23:00.694 [2024-12-08 06:26:50.744260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282c4a0 (9): Bad file descriptor
00:23:00.694 [2024-12-08 06:26:50.744293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27fa730 (9): Bad file descriptor
00:23:00.694 [2024-12-08 06:26:50.744473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.694 [2024-12-08 06:26:50.744501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ commands cid:1 through cid:62 (lba 24704 through 32512, advancing by 128 blocks) each aborted with SQ DELETION (00/08), as above ...]
00:23:00.695 [2024-12-08 06:26:50.746535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.695 [2024-12-08 06:26:50.746548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.695 [2024-12-08 06:26:50.746563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27ddb10 is same with the state(6) to be set
00:23:00.695 [2024-12-08 06:26:50.747868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.695 [2024-12-08 06:26:50.747891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ commands cid:1 through cid:62 (lba 16512 through 24320, advancing by 128 blocks) each aborted with SQ DELETION (00/08), as above ...]
00:23:00.697 [2024-12-08 06:26:50.749825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.697 [2024-12-08 06:26:50.749839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.697 [2024-12-08 06:26:50.749853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27deca0 is same with the state(6) to be set
00:23:00.697 [2024-12-08 06:26:50.751156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.697 [2024-12-08 06:26:50.751180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ commands cid:1 through cid:44 (lba 16512 through 22016, advancing by 128 blocks) each aborted with SQ DELETION (00/08), as above ...]
00:23:00.699 [2024-12-08 06:26:50.752544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:00.699 [2024-12-08 06:26:50.752558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.699 [2024-12-08 06:26:50.752573] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.752601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.752630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.752659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.752688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.752734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.752771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.752801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.752830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.752858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.752887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.752916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.752945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.752974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.752988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.753003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.753016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.753032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.753045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.753061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.753074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.753096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.753110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.753124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x261e4f0 is same with the state(6) to be set 00:23:00.699 [2024-12-08 06:26:50.754338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:00.699 [2024-12-08 06:26:50.754371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:00.699 [2024-12-08 06:26:50.754392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:00.699 [2024-12-08 06:26:50.754838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.699 [2024-12-08 06:26:50.754869] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d6200 with addr=10.0.0.2, port=4420 00:23:00.699 [2024-12-08 06:26:50.754886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d6200 is same with the state(6) to be set 00:23:00.699 [2024-12-08 06:26:50.754981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.699 [2024-12-08 06:26:50.755007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2803820 with addr=10.0.0.2, port=4420 00:23:00.699 [2024-12-08 06:26:50.755023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2803820 is same with the state(6) to be set 00:23:00.699 [2024-12-08 06:26:50.755195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.699 [2024-12-08 06:26:50.755220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2834c10 with addr=10.0.0.2, port=4420 00:23:00.699 [2024-12-08 06:26:50.755245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2834c10 is same with the state(6) to be set 00:23:00.699 [2024-12-08 06:26:50.755893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.755916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.755938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.755953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.755969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.755983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.756000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.756013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.756031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.756045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.756060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.756074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.756090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.756104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.699 [2024-12-08 06:26:50.756120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.699 [2024-12-08 06:26:50.756139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.756972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.756988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.757001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.757028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.757042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.757058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.757072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.757088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.757102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.757117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.757131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.757147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.757161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.757176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.757190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.757206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.757220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.757236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.757249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.757265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.700 [2024-12-08 06:26:50.757279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.700 [2024-12-08 06:26:50.757300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.757860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.757874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27dffb0 is same with the state(6) to be set 00:23:00.701 [2024-12-08 06:26:50.759133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759206] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.701 [2024-12-08 06:26:50.759673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.701 [2024-12-08 06:26:50.759693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.759716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.759742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.759757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.759773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.759787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.759804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.759818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.759834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.759850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.759867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.759881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.759898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.759912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.759928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.759943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.759959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.759974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.759990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.702 [2024-12-08 06:26:50.760491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 
06:26:50.760825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.702 [2024-12-08 06:26:50.760949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.702 [2024-12-08 06:26:50.760965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.760980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.760995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.761015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.761031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.761045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.761060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.761080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.761095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.761109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.761125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.761139] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.761155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.761168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.761182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e12c0 is same with the state(6) to be set 00:23:00.703 [2024-12-08 06:26:50.762443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.762968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.762986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.763003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.763023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.763039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.763052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.763068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.763081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.763097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.763111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.763127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.763141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.763157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.763171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.763186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.763200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.763216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.763229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.763246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.763260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.763276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.763290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.703 [2024-12-08 06:26:50.763305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.703 [2024-12-08 06:26:50.763319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.763980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.763994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.764474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.764489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x3728070 is same with the state(6) to be set 00:23:00.704 [2024-12-08 06:26:50.765764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.765787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.765811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.765828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.704 [2024-12-08 06:26:50.765850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.704 [2024-12-08 06:26:50.765865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.765881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.765894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.765910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.765924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.765940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.765954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.765970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.765984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.766985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.766999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.767015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.767038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.767054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.767072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.767089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.705 [2024-12-08 06:26:50.767103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.705 [2024-12-08 06:26:50.767120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.706 [2024-12-08 06:26:50.767824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.706 [2024-12-08 06:26:50.767837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.706 [2024-12-08 06:26:50.767851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x261d200 is same with the state(6) to be set
00:23:00.706 [2024-12-08 06:26:50.769405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:00.706 [2024-12-08 06:26:50.769438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:00.706 [2024-12-08 06:26:50.769461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:00.706 [2024-12-08 06:26:50.769489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:00.706 [2024-12-08 06:26:50.769512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:00.706 [2024-12-08 06:26:50.769603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d6200 (9): Bad file descriptor
00:23:00.706 [2024-12-08 06:26:50.769630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2803820 (9): Bad file descriptor
00:23:00.706 [2024-12-08 06:26:50.769647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2834c10 (9): Bad file descriptor
00:23:00.706 [2024-12-08 06:26:50.769743] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:00.706 [2024-12-08 06:26:50.769773] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:23:00.706 [2024-12-08 06:26:50.769795] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:23:00.706 [2024-12-08 06:26:50.769813] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:23:00.706 [2024-12-08 06:26:50.769833] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:23:00.966 [2024-12-08 06:26:50.785761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:00.966 task offset: 31744 on job bdev=Nvme1n1 fails
00:23:00.966
00:23:00.966 Latency(us)
00:23:00.966 [2024-12-08T05:26:51.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:00.966 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.966 Job: Nvme1n1 ended in about 0.96 seconds with error
00:23:00.966 Verification LBA range: start 0x0 length 0x400
00:23:00.966 Nvme1n1 : 0.96 204.27 12.77 66.70 0.00 233455.57 7912.87 268746.15
00:23:00.966 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.966 Job: Nvme2n1 ended in about 0.96 seconds with error
00:23:00.966 Verification LBA range: start 0x0 length 0x400
00:23:00.966 Nvme2n1 : 0.96 199.87 12.49 66.62 0.00 232518.64 8155.59 253211.69
00:23:00.966 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.966 Job: Nvme3n1 ended in about 0.97 seconds with error
00:23:00.966 Verification LBA range: start 0x0 length 0x400
00:23:00.966 Nvme3n1 : 0.97 202.39 12.65 66.09 0.00 226004.26 9709.04 257872.02
00:23:00.966 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.966 Job: Nvme4n1 ended in about 0.98 seconds with error
00:23:00.966 Verification LBA range: start 0x0 length 0x400
00:23:00.966 Nvme4n1 : 0.98 194.96 12.19 64.99 0.00 228722.54 29709.65 259425.47
00:23:00.966 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.966 Job: Nvme5n1 ended in about 0.99 seconds with error
00:23:00.966 Verification LBA range: start 0x0 length 0x400
00:23:00.966 Nvme5n1 : 0.99 129.55 8.10 64.77 0.00 299602.17 22233.69 278066.82
00:23:00.966 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.966 Job: Nvme6n1 ended in about 1.00 seconds with error
00:23:00.966 Verification LBA range: start 0x0 length 0x400
00:23:00.966 Nvme6n1 : 1.00 128.51 8.03 64.25 0.00 295844.72 22427.88 278066.82
00:23:00.966 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.966 Job: Nvme7n1 ended in about 1.00 seconds with error
00:23:00.966 Verification LBA range: start 0x0 length 0x400
00:23:00.966 Nvme7n1 : 1.00 128.08 8.01 64.04 0.00 290410.70 20194.80 278066.82
00:23:00.966 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.966 Job: Nvme8n1 ended in about 1.00 seconds with error
00:23:00.966 Verification LBA range: start 0x0 length 0x400
00:23:00.966 Nvme8n1 : 1.00 127.66 7.98 63.83 0.00 284989.82 24660.95 309135.74
00:23:00.966 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.966 Job: Nvme9n1 ended in about 1.01 seconds with error
00:23:00.966 Verification LBA range: start 0x0 length 0x400
00:23:00.966 Nvme9n1 : 1.01 127.24 7.95 63.62 0.00 279909.26 22233.69 273406.48
00:23:00.966 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.966 Job: Nvme10n1 ended in about 0.99 seconds with error
00:23:00.966 Verification LBA range: start 0x0 length 0x400
00:23:00.966 Nvme10n1 : 0.99 129.12 8.07 64.56 0.00 268954.99 20388.98 295154.73
00:23:00.966 [2024-12-08T05:26:51.085Z] ===================================================================================================================
00:23:00.966 [2024-12-08T05:26:51.085Z] Total : 1571.65 98.23 649.48 0.00 259945.94 7912.87 309135.74
00:23:00.966 [2024-12-08 06:26:50.818198] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:00.966 [2024-12-08 06:26:50.818288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:00.966 1571.65 IOPS, 98.23 MiB/s [2024-12-08T05:26:51.085Z]
[2024-12-08 06:26:50.818680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.966 [2024-12-08 06:26:50.818715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d9910 with addr=10.0.0.2, port=4420
00:23:00.966 [2024-12-08 06:26:50.818782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9910 is same with the state(6) to be set
00:23:00.966 [2024-12-08 06:26:50.818939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.966 [2024-12-08 06:26:50.818974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d9480 with addr=10.0.0.2, port=4420
00:23:00.966 [2024-12-08 06:26:50.818991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9480 is same with the state(6) to be set
00:23:00.966 [2024-12-08 06:26:50.819116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.966 [2024-12-08 06:26:50.819142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d5ad0 with addr=10.0.0.2, port=4420
00:23:00.966 [2024-12-08 06:26:50.819158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d5ad0 is same with the state(6) to be set
00:23:00.966 [2024-12-08 06:26:50.819335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.966 [2024-12-08 06:26:50.819361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2341110 with addr=10.0.0.2, port=4420
00:23:00.966 [2024-12-08 06:26:50.819377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341110 is same with the state(6) to be set
00:23:00.966 [2024-12-08 06:26:50.819537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.966 [2024-12-08 06:26:50.819563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27fa730 with addr=10.0.0.2, port=4420
00:23:00.966 [2024-12-08 06:26:50.819578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27fa730 is same with the state(6) to be set
00:23:00.966 [2024-12-08 06:26:50.819594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:00.966 [2024-12-08 06:26:50.819606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:00.966 [2024-12-08 06:26:50.819630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:00.966 [2024-12-08 06:26:50.819660] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:00.966 [2024-12-08 06:26:50.819679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:23:00.966 [2024-12-08 06:26:50.819691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:23:00.966 [2024-12-08 06:26:50.819704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:23:00.966 [2024-12-08 06:26:50.819716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:23:00.966 [2024-12-08 06:26:50.819740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:00.966 [2024-12-08 06:26:50.819754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:00.966 [2024-12-08 06:26:50.819774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:00.966 [2024-12-08 06:26:50.819786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:23:00.966 [2024-12-08 06:26:50.819876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27fa730 (9): Bad file descriptor
00:23:00.966 [2024-12-08 06:26:50.819915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2341110 (9): Bad file descriptor
00:23:00.966 [2024-12-08 06:26:50.819939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d5ad0 (9): Bad file descriptor
00:23:00.966 [2024-12-08 06:26:50.819961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d9480 (9): Bad file descriptor
00:23:00.966 [2024-12-08 06:26:50.819983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d9910 (9): Bad file descriptor
00:23:00.966 [2024-12-08 06:26:50.821391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.966 [2024-12-08 06:26:50.821422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x284a360 with addr=10.0.0.2, port=4420
00:23:00.966 [2024-12-08 06:26:50.821438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x284a360 is same with the state(6) to be set
00:23:00.966 [2024-12-08 06:26:50.821558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.966 [2024-12-08 06:26:50.821583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x282c4a0 with addr=10.0.0.2, port=4420
00:23:00.966 [2024-12-08 06:26:50.821599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282c4a0 is same with the state(6) to be set
00:23:00.966 [2024-12-08 06:26:50.821638] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:23:00.966 [2024-12-08 06:26:50.821660] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:23:00.966 [2024-12-08 06:26:50.821679] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:23:00.966 [2024-12-08 06:26:50.821698] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:23:00.966 [2024-12-08 06:26:50.821716] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:23:00.966 [2024-12-08 06:26:50.821743] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:23:00.966 [2024-12-08 06:26:50.821763] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:23:00.966 [2024-12-08 06:26:50.821783] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:23:00.966 [2024-12-08 06:26:50.822077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:00.966 [2024-12-08 06:26:50.822106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:00.966 [2024-12-08 06:26:50.822123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:00.967 [2024-12-08 06:26:50.822192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x284a360 (9): Bad file descriptor
00:23:00.967 [2024-12-08 06:26:50.822218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282c4a0 (9): Bad file descriptor
00:23:00.967 [2024-12-08 06:26:50.822234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:00.967 [2024-12-08 06:26:50.822246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:00.967 [2024-12-08 06:26:50.822259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:00.967 [2024-12-08 06:26:50.822271] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:00.967 [2024-12-08 06:26:50.822285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:00.967 [2024-12-08 06:26:50.822297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:00.967 [2024-12-08 06:26:50.822309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:00.967 [2024-12-08 06:26:50.822321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:00.967 [2024-12-08 06:26:50.822333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:00.967 [2024-12-08 06:26:50.822345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:00.967 [2024-12-08 06:26:50.822357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:00.967 [2024-12-08 06:26:50.822368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:00.967 [2024-12-08 06:26:50.822381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:23:00.967 [2024-12-08 06:26:50.822392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:23:00.967 [2024-12-08 06:26:50.822404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:23:00.967 [2024-12-08 06:26:50.822415] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:23:00.967 [2024-12-08 06:26:50.822428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:00.967 [2024-12-08 06:26:50.822439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:00.967 [2024-12-08 06:26:50.822451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:00.967 [2024-12-08 06:26:50.822462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:00.967 [2024-12-08 06:26:50.822698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.967 [2024-12-08 06:26:50.822759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2834c10 with addr=10.0.0.2, port=4420
00:23:00.967 [2024-12-08 06:26:50.822778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2834c10 is same with the state(6) to be set
00:23:00.967 [2024-12-08 06:26:50.822867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.967 [2024-12-08 06:26:50.822893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2803820 with addr=10.0.0.2, port=4420
00:23:00.967 [2024-12-08 06:26:50.822908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2803820 is same with the state(6) to be set
00:23:00.967 [2024-12-08 06:26:50.823013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.967 [2024-12-08 06:26:50.823037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d6200 with addr=10.0.0.2, port=4420
00:23:00.967 [2024-12-08 06:26:50.823052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d6200 is same with the state(6) to be set
00:23:00.967 [2024-12-08 06:26:50.823066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:23:00.967 [2024-12-08 06:26:50.823078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:23:00.967 [2024-12-08 06:26:50.823091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:23:00.967 [2024-12-08 06:26:50.823104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:23:00.967 [2024-12-08 06:26:50.823118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:00.967 [2024-12-08 06:26:50.823130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:00.967 [2024-12-08 06:26:50.823142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:00.967 [2024-12-08 06:26:50.823154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:00.967 [2024-12-08 06:26:50.823195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2834c10 (9): Bad file descriptor 00:23:00.967 [2024-12-08 06:26:50.823218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2803820 (9): Bad file descriptor 00:23:00.967 [2024-12-08 06:26:50.823235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d6200 (9): Bad file descriptor 00:23:00.967 [2024-12-08 06:26:50.823273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:00.967 [2024-12-08 06:26:50.823289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:00.967 [2024-12-08 06:26:50.823302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:00.967 [2024-12-08 06:26:50.823315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:00.967 [2024-12-08 06:26:50.823328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:00.967 [2024-12-08 06:26:50.823340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:00.967 [2024-12-08 06:26:50.823352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:00.967 [2024-12-08 06:26:50.823364] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:00.967 [2024-12-08 06:26:50.823376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:00.967 [2024-12-08 06:26:50.823388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:00.967 [2024-12-08 06:26:50.823400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:00.967 [2024-12-08 06:26:50.823411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
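The cascade above is shutdown_tc3 behaving as intended: the target side has already exited (note the spdk_app_stop'd on non-zero warning at the top), so every reconnect attempt from the host's bdev_nvme layer to 10.0.0.2:4420 is refused with errno 111, which is ECONNREFUSED on Linux, and cnode1 through cnode10 each land in the failed state once spdk_nvme_ctrlr_reconnect_poll_async gives up. A minimal sketch of the same check from a shell, assuming bash's /dev/tcp support; this is not part of the test itself:

# With no listener left on 10.0.0.2:4420, a raw TCP connect is refused --
# the same errno 111 that the posix_sock_create errors above report.
(exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null || echo 'connect refused (errno 111 / ECONNREFUSED)'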
00:23:01.237 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1113684 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1113684 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1113684 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:02.212 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.213 rmmod nvme_tcp 00:23:02.213 
rmmod nvme_fabrics 00:23:02.213 rmmod nvme_keyring 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1113510 ']' 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1113510 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1113510 ']' 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1113510 00:23:02.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1113510) - No such process 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1113510 is not found' 00:23:02.213 Process with pid 1113510 is not found 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.213 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:04.748 00:23:04.748 real 0m7.264s 00:23:04.748 user 0m17.567s 00:23:04.748 sys 0m1.465s 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:04.748 ************************************ 00:23:04.748 END TEST nvmf_shutdown_tc3 00:23:04.748 ************************************ 00:23:04.748 06:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:04.748 ************************************ 00:23:04.748 START TEST nvmf_shutdown_tc4 00:23:04.748 ************************************ 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.748 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:04.749 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:04.749 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.749 06:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:04.749 Found net devices under 0000:84:00.0: cvl_0_0 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:04.749 Found net devices under 0000:84:00.1: cvl_0_1 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:04.749 06:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.749 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:04.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:23:04.750 00:23:04.750 --- 10.0.0.2 ping statistics --- 00:23:04.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.750 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:23:04.750 00:23:04.750 --- 10.0.0.1 ping statistics --- 00:23:04.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.750 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1114543 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1114543 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1114543 ']' 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
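Stripped of the xtrace noise, the nvmftestinit bring-up traced above amounts to the following, condensed from the trace itself (cvl_0_0 and cvl_0_1 are the test's names for the two E810 ports, with the target port moved into its own network namespace):

# Target port lives in a private namespace; initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator interface, tagged so cleanup can find the rule:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
# Sanity-check both directions before launching nvmf_tgt inside the namespace:
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Both pings answering (0.237 ms and 0.111 ms round trips above) is what lets common.sh return 0 and proceed to nvmfappstart.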
00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.750 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:04.750 [2024-12-08 06:26:54.633861] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:23:04.750 [2024-12-08 06:26:54.633965] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.750 [2024-12-08 06:26:54.711775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:04.750 [2024-12-08 06:26:54.772202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.750 [2024-12-08 06:26:54.772274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.750 [2024-12-08 06:26:54.772303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.750 [2024-12-08 06:26:54.772314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.750 [2024-12-08 06:26:54.772324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.750 [2024-12-08 06:26:54.774139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.750 [2024-12-08 06:26:54.774202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:04.750 [2024-12-08 06:26:54.774266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:04.750 [2024-12-08 06:26:54.774270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.009 [2024-12-08 06:26:54.929805] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:05.009 06:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.009 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.010 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.010 Malloc1 
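After the TCP transport is created (-t tcp selects the transport and -u 8192 sets the I/O unit size; the extra -o is carried in via the test's NVMF_TRANSPORT_OPTS), the loop above cats one configuration block per subsystem into rpcs.txt, and the Malloc1..Malloc10 lines that follow are the ten backing bdevs being instantiated. A hedged reconstruction of what each block emits, based on the cnode/Malloc pairing visible in this log rather than on shutdown.sh itself; sizes and the -a/-s arguments are illustrative:

# One malloc bdev + one subsystem + one TCP listener per index, 1..10.
for i in {1..10}; do
  cat <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done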
00:23:05.010 [2024-12-08 06:26:55.035976] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.010 Malloc2 00:23:05.010 Malloc3 00:23:05.267 Malloc4 00:23:05.267 Malloc5 00:23:05.267 Malloc6 00:23:05.267 Malloc7 00:23:05.267 Malloc8 00:23:05.525 Malloc9 00:23:05.525 Malloc10 00:23:05.525 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.525 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:05.525 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:05.525 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.525 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1114650 00:23:05.525 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:05.525 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:05.525 [2024-12-08 06:26:55.578423] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:10.796 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.796 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1114543 00:23:10.796 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1114543 ']' 00:23:10.796 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1114543 00:23:10.796 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:10.796 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.796 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1114543 00:23:10.796 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:10.797 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:10.797 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1114543' 00:23:10.797 killing process with pid 1114543 00:23:10.797 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1114543 00:23:10.797 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1114543 00:23:10.797 Write completed with error (sct=0, 
sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 [2024-12-08 06:27:00.578568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ [2024-12-08 06:27:00.578614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24218e0 is same with transport error -6 (No such device or address) on qpair id 2 00:23:10.797 the state(6) to be set 00:23:10.797 [2024-12-08 06:27:00.578682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24218e0 is same with the state(6) to be set 00:23:10.797 [2024-12-08 06:27:00.578698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24218e0 is same with the state(6) to be set 00:23:10.797 [2024-12-08 06:27:00.578711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24218e0 is same with the state(6) to be set 00:23:10.797 [2024-12-08 06:27:00.578732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24218e0 is same with the state(6) to be set 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 [2024-12-08 06:27:00.579771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:10.797 starting I/O failed: -6 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: -6 00:23:10.797 Write completed with error (sct=0, sc=8) 00:23:10.797 starting I/O failed: 
-6
00:23:10.797 Write completed with error (sct=0, sc=8)
00:23:10.797 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.798 [2024-12-08 06:27:00.581265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:10.798 Write completed with error (sct=0, sc=8)
00:23:10.798 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.798 [2024-12-08 06:27:00.583198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:10.798 NVMe io qpair process completion error
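The "Write completed with error (sct=0, sc=8)" entries above are printed by the I/O completion callback of the test tool driving writes at the initiator. A minimal sketch of such a callback follows; the function name write_complete and the io_outstanding counter are illustrative (the actual test binary's code may differ), but the struct spdk_nvme_cpl status fields and spdk_nvme_cpl_is_error() are the real SPDK API.

#include "spdk/nvme.h"
#include <stdio.h>

/* Hypothetical completion callback: prints the same status pair
 * (sct = status code type, sc = status code) seen in the log above. */
static void
write_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	int *io_outstanding = cb_arg;

	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct=0 is the generic status code type; sc=8 within that
		 * type means the command was aborted because its submission
		 * queue was deleted when the controller disconnected. */
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
	(*io_outstanding)--;
}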
00:23:10.798 Write completed with error (sct=0, sc=8)
00:23:10.798 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.798 [2024-12-08 06:27:00.584480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:10.799 Write completed with error (sct=0, sc=8)
00:23:10.799 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.799 [2024-12-08 06:27:00.585620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:10.799 Write completed with error (sct=0, sc=8)
00:23:10.799 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.799 [2024-12-08 06:27:00.586960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:10.800 Write completed with error (sct=0, sc=8)
00:23:10.800 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.800 [2024-12-08 06:27:00.589260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:10.800 NVMe io qpair process completion error
00:23:10.800 Write completed with error (sct=0, sc=8)
00:23:10.800 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.800 [2024-12-08 06:27:00.590596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:10.800 Write completed with error (sct=0, sc=8)
00:23:10.800 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.800 [2024-12-08 06:27:00.591824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:10.800 Write completed with error (sct=0, sc=8)
00:23:10.800 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.801 [2024-12-08 06:27:00.593138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:10.801 Write completed with error (sct=0, sc=8)
00:23:10.801 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.801 [2024-12-08 06:27:00.595068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:10.801 NVMe io qpair process completion error
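Each "CQ transport error -6 (No such device or address)" entry is spdk_nvme_qpair_process_completions() reporting -ENXIO (-6) because the TCP connection underneath the queue pair has gone away while the target subsystems are torn down. A sketch of the polling pattern that surfaces this return code; spdk_nvme_qpair_process_completions() is the real API, while the poll_io() wrapper is a hypothetical name.

#include "spdk/nvme.h"
#include <stdio.h>
#include <string.h>

/* Hypothetical poller: drains completions and reports transport loss.
 * spdk_nvme_qpair_process_completions() returns the number of
 * completions reaped, or a negative errno such as -ENXIO (-6,
 * "No such device or address") once the transport is disconnected. */
static int32_t
poll_io(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc < 0) {
		fprintf(stderr, "CQ transport error %d (%s)\n",
			rc, strerror(-rc));
		/* The application must now reconnect or retire the qpair;
		 * commands still in flight complete with ABORTED status. */
	}
	return rc;
}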
00:23:10.801 Write completed with error (sct=0, sc=8)
00:23:10.801 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.801 [2024-12-08 06:27:00.596184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:10.802 Write completed with error (sct=0, sc=8)
00:23:10.802 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.802 [2024-12-08 06:27:00.597402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:10.802 Write completed with error (sct=0, sc=8)
00:23:10.802 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.802 [2024-12-08 06:27:00.598729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:10.802 Write completed with error (sct=0, sc=8)
00:23:10.802 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.803 [2024-12-08 06:27:00.601486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:10.803 NVMe io qpair process completion error
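Decoding the recurring status pair: sct=0 is SPDK_NVME_SCT_GENERIC, and within that type sc=8 is SPDK_NVME_SC_ABORTED_SQ_DELETION, i.e. each write was aborted because its submission queue was deleted along with the disconnected controller. A small sketch, assuming SPDK's public nvme headers; spdk_nvme_cpl_get_status_string() is the real helper, the check_status() wrapper is illustrative.

#include "spdk/nvme.h"
#include <stdio.h>

/* Illustrative decoder for the (sct=0, sc=8) pair seen in the log. */
static void
check_status(const struct spdk_nvme_cpl *cpl)
{
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Human-readable form, e.g. "ABORTED - SQ DELETION" */
		printf("%s\n", spdk_nvme_cpl_get_status_string(&cpl->status));
	}
}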
00:23:10.803 Write completed with error (sct=0, sc=8)
00:23:10.803 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.803 [2024-12-08 06:27:00.602974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:10.803 Write completed with error (sct=0, sc=8)
00:23:10.803 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.803 [2024-12-08 06:27:00.604084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:10.803 Write completed with error (sct=0, sc=8)
00:23:10.803 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.804 [2024-12-08 06:27:00.605342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:10.804 Write completed with error (sct=0, sc=8)
00:23:10.804 starting I/O failed: -6
[... many more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 [2024-12-08 06:27:00.608609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:10.804 NVMe io qpair process completion error 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write 
completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 [2024-12-08 06:27:00.609962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 starting I/O failed: -6 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.804 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write 
completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 [2024-12-08 06:27:00.611144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, 
sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 [2024-12-08 06:27:00.612417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 
00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.805 Write completed with error (sct=0, sc=8) 00:23:10.805 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 [2024-12-08 06:27:00.615409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:10.806 NVMe io qpair process completion error 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 
00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 [2024-12-08 06:27:00.616835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 
00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 [2024-12-08 06:27:00.617967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.806 Write completed with error (sct=0, sc=8) 00:23:10.806 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 
Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 [2024-12-08 06:27:00.619254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write 
completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write 
completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 [2024-12-08 06:27:00.621476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:10.807 NVMe io qpair process completion error 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 Write completed with error (sct=0, sc=8) 00:23:10.807 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write 
completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 [2024-12-08 06:27:00.622985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write 
completed with error (sct=0, sc=8) 00:23:10.808 [2024-12-08 06:27:00.624059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, 
sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 [2024-12-08 06:27:00.625416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.808 starting I/O failed: -6 00:23:10.808 Write completed with error (sct=0, sc=8) 00:23:10.809 starting I/O failed: -6 00:23:10.809 Write completed with error (sct=0, sc=8) 00:23:10.809 starting I/O failed: -6 00:23:10.809 Write completed with error (sct=0, sc=8) 00:23:10.809 starting I/O failed: -6 00:23:10.809 Write completed with error (sct=0, sc=8) 00:23:10.809 starting I/O failed: -6 00:23:10.809 Write completed with error (sct=0, sc=8) 00:23:10.809 starting I/O failed: -6 00:23:10.809 Write completed with error (sct=0, sc=8) 00:23:10.809 starting I/O failed: -6 00:23:10.809 Write completed with error (sct=0, sc=8) 00:23:10.809 starting I/O failed: -6 00:23:10.809 Write completed with error (sct=0, sc=8) 00:23:10.809 starting I/O failed: -6 00:23:10.809 Write completed with error (sct=0, sc=8) 00:23:10.809 starting I/O failed: -6 00:23:10.809 Write completed with error (sct=0, sc=8) 00:23:10.809 starting I/O failed: -6 00:23:10.809 Write completed with error (sct=0, sc=8) 00:23:10.809 starting I/O failed: -6 00:23:10.809 Write completed with error (sct=0, sc=8) 00:23:10.809 starting I/O failed: -6 00:23:10.809 Write completed with error (sct=0, sc=8) 
00:23:10.809 starting I/O failed: -6
00:23:10.809 Write completed with error (sct=0, sc=8)
[several hundred identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completion lines, interleaved throughout this block, condensed; the distinct qpair failures were:]
00:23:10.809 [2024-12-08 06:27:00.628367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:10.809 NVMe io qpair process completion error
00:23:10.809 [2024-12-08 06:27:00.629815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:10.809 [2024-12-08 06:27:00.631098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:10.810 [2024-12-08 06:27:00.632475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:10.810 [2024-12-08 06:27:00.636165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:10.810 NVMe io qpair process completion error
00:23:10.811 [2024-12-08 06:27:00.637616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:10.811 [2024-12-08 06:27:00.638882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:10.811 [2024-12-08 06:27:00.640207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:10.812 [2024-12-08 06:27:00.644886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:10.812 NVMe io qpair process completion error
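An aside for readers tracing this failure signature: the -6 is -ENXIO ("No such device or address"), surfaced by spdk_nvme_qpair_process_completions() once the TCP connection to the target side is gone, so every write still queued on the qpair completes with a transport-level error rather than a normal NVMe status. A minimal bash sketch of how a shutdown test of this kind can provoke the pattern; the paths, the elided RPC setup, and the perf parameters are illustrative assumptions, not the literal shutdown.sh logic:

  #!/usr/bin/env bash
  # Hypothetical sketch: start an SPDK NVMe-oF/TCP target, drive writes at it,
  # then kill the target mid-run so in-flight I/O completes with
  # "CQ transport error -6 (No such device or address)".
  SPDK=/path/to/spdk                     # assumption: your SPDK checkout

  "$SPDK/build/bin/nvmf_tgt" &
  tgt_pid=$!
  sleep 2
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t TCP
  # ... subsystem, namespace and listener setup elided ...

  "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 65536 -w write -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
  perf_pid=$!
  sleep 3
  kill -9 "$tgt_pid"                     # yank the target away under load
  wait "$perf_pid" || echo "perf exited nonzero: errors occurred (expected)"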
00:23:10.812 Initializing NVMe Controllers
00:23:10.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:10.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:10.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:10.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:10.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:10.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:10.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:10.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:10.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:10.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
[each of the ten controllers also reported: "Controller IO queue size 128, less than required. Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver."]
00:23:10.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
[matching "Associating TCP ... NSID 1 with lcore 0" lines for cnode5, cnode10, cnode7, cnode8, cnode4, cnode6, cnode9, cnode2 and cnode3 condensed]
00:23:10.812 Initialization complete. Launching workers.
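The "Controller IO queue size 128, less than required" warning above means the target capped each I/O queue at 128 entries while the workload asked for more; the surplus requests simply sit queued inside the host NVMe driver, as the message says. Following the log's own advice, a hedged example of rerunning with a queue depth the target can absorb and a smaller I/O size (same illustrative transport string as the sketch above):

  # Keep per-queue depth (-q) at or under the target's 128-entry limit and/or
  # shrink the I/O size (-o, in bytes) so fewer requests queue up in the driver.
  spdk_nvme_perf -q 128 -o 4096 -w randwrite -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'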
00:23:10.812 ========================================================
00:23:10.812                                                                  Latency(us)
00:23:10.812 Device Information                                              :      IOPS     MiB/s    Average        min        max
00:23:10.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:   1619.58     69.59   79048.65     951.52  130401.54
00:23:10.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:   1728.19     74.26   74107.77     882.98  131260.23
00:23:10.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1663.41     71.47   77034.12     967.96  134919.41
00:23:10.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:   1707.03     73.35   75113.44    1196.85  130247.17
00:23:10.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:   1733.97     74.51   73989.23    1122.84  142312.70
00:23:10.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:   1732.90     74.46   74065.79    1113.86  145681.23
00:23:10.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:   1690.14     72.62   75984.50    1272.79  148811.25
00:23:10.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:   1694.41     72.81   75849.61    1187.32  129543.87
00:23:10.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:   1704.46     73.24   74388.68    1075.26  129859.12
00:23:10.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:   1696.76     72.91   74750.07     885.03  130006.23
00:23:10.812 ========================================================
00:23:10.812 Total                                                           :  16970.85    729.22   75404.12     882.98  148811.25
00:23:10.812
00:23:10.812 [2024-12-08 06:27:00.650046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8b720 is same with the state(6) to be set
00:23:10.813 [2024-12-08 06:27:00.650165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e89d10 is same with the state(6) to be set
00:23:10.813 [2024-12-08 06:27:00.650226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8b900 is same with the state(6) to be set
00:23:10.813 [2024-12-08 06:27:00.650286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a5f0 is same with the state(6) to be set
00:23:10.813 [2024-12-08 06:27:00.650344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a920 is same with the state(6) to be set
00:23:10.813 [2024-12-08 06:27:00.650403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e899e0 is same with the state(6) to be set
00:23:10.813 [2024-12-08 06:27:00.650471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a2c0 is same with the state(6) to be set
00:23:10.813 [2024-12-08 06:27:00.650529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8ac50 is same with the state(6) to be set
00:23:10.813 [2024-12-08 06:27:00.650588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bae0 is same with the state(6) to be set
00:23:10.813 [2024-12-08 06:27:00.650644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e896b0 is same with the state(6) to be set
00:23:10.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:11.073 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:23:12.010 06:27:02
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1114650 00:23:12.010 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:23:12.010 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1114650 00:23:12.010 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:12.010 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.010 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1114650 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.011 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.011 rmmod nvme_tcp 00:23:12.011 rmmod nvme_fabrics 00:23:12.011 rmmod nvme_keyring 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1114543 ']' 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1114543 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1114543 ']' 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1114543 00:23:12.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1114543) - No such process 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1114543 is not found' 00:23:12.271 Process with pid 1114543 is not found 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.175 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.175 00:23:14.175 real 0m9.813s 00:23:14.175 user 0m24.178s 00:23:14.175 sys 0m6.263s 00:23:14.175 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.175 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:14.175 ************************************ 00:23:14.175 END TEST nvmf_shutdown_tc4 00:23:14.175 ************************************ 00:23:14.175 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:14.175 00:23:14.175 real 0m37.481s 00:23:14.175 user 1m41.471s 00:23:14.175 sys 0m12.822s 00:23:14.175 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.175 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:23:14.175 ************************************ 00:23:14.175 END TEST nvmf_shutdown 00:23:14.175 ************************************ 00:23:14.175 06:27:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:14.175 06:27:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:14.175 06:27:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:14.175 06:27:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:14.175 ************************************ 00:23:14.175 START TEST nvmf_nsid 00:23:14.175 ************************************ 00:23:14.175 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:14.434 * Looking for test storage... 00:23:14.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:14.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.434 --rc genhtml_branch_coverage=1 00:23:14.434 --rc genhtml_function_coverage=1 00:23:14.434 --rc genhtml_legend=1 00:23:14.434 --rc geninfo_all_blocks=1 00:23:14.434 --rc geninfo_unexecuted_blocks=1 00:23:14.434 00:23:14.434 ' 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:14.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.434 --rc genhtml_branch_coverage=1 00:23:14.434 --rc genhtml_function_coverage=1 00:23:14.434 --rc genhtml_legend=1 00:23:14.434 --rc geninfo_all_blocks=1 00:23:14.434 --rc geninfo_unexecuted_blocks=1 00:23:14.434 00:23:14.434 ' 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:14.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.434 --rc genhtml_branch_coverage=1 00:23:14.434 --rc genhtml_function_coverage=1 00:23:14.434 --rc genhtml_legend=1 00:23:14.434 --rc geninfo_all_blocks=1 00:23:14.434 --rc geninfo_unexecuted_blocks=1 00:23:14.434 00:23:14.434 ' 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:14.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.434 --rc genhtml_branch_coverage=1 00:23:14.434 --rc genhtml_function_coverage=1 00:23:14.434 --rc genhtml_legend=1 00:23:14.434 --rc geninfo_all_blocks=1 00:23:14.434 --rc geninfo_unexecuted_blocks=1 00:23:14.434 00:23:14.434 ' 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.434 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.435 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:16.967 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:16.968 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:16.968 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
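The walk above is gather_supported_nvmf_pci_devs: a whitelist of Intel (0x8086) and Mellanox (0x15b3) vendor:device pairs is assembled and matched against the host's PCI bus, and both ports of an Intel E810 (device 0x159b, bound to the ice driver) turn up at 0000:84:00.0/.1. A minimal sketch of that sysfs matching, trimmed to the two E810 IDs seen in this run (the helper below is an illustration, not the library code):

    #!/usr/bin/env bash
    # Match NICs against a small vendor:device whitelist, then list the
    # kernel net interfaces each matched port exposes.
    wanted=("0x8086:0x1592" "0x8086:0x159b")          # E810 IDs from this log
    for pci in /sys/bus/pci/devices/*; do
        ven=$(<"$pci/vendor") dev=$(<"$pci/device")
        for id in "${wanted[@]}"; do
            [[ "$ven:$dev" == "$id" ]] || continue
            echo "Found ${pci##*/} ($ven - $dev)"
            for netdir in "$pci"/net/*; do            # present once a driver binds
                [[ -e "$netdir" ]] && echo "  net device: ${netdir##*/}"
            done
        done
    done

On this box the matched ports surface as cvl_0_0 and cvl_0_1, which is what the 'Found net devices under 0000:84:00.x' lines that follow report.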
00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:16.968 Found net devices under 0000:84:00.0: cvl_0_0 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:16.968 Found net devices under 0000:84:00.1: cvl_0_1 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.968 06:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:16.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:23:16.968 00:23:16.968 --- 10.0.0.2 ping statistics --- 00:23:16.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.968 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:23:16.968 00:23:16.968 --- 10.0.0.1 ping statistics --- 00:23:16.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.968 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:16.968 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:16.969 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1117516 00:23:16.969 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:16.969 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1117516 00:23:16.969 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1117516 ']' 00:23:16.969 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.969 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.969 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.969 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.969 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:16.969 [2024-12-08 06:27:06.817193] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
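nvmf_tcp_init above builds the standard phy-mode topology: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule tagged with an SPDK_NVMF comment opens TCP port 4420, and both directions are ping-verified. Condensed to its commands (names and addresses copied from the trace; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the comment tag lets teardown drop exactly this rule later via
    # iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Prefixing NVMF_APP with 'ip netns exec cvl_0_0_ns_spdk' is what makes the target whose DPDK startup notices follow listen from inside the namespace, while host-side nvme-cli acts as a physically separate initiator.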
00:23:16.969 [2024-12-08 06:27:06.817268] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.969 [2024-12-08 06:27:06.885764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.969 [2024-12-08 06:27:06.944327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.969 [2024-12-08 06:27:06.944384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.969 [2024-12-08 06:27:06.944397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.969 [2024-12-08 06:27:06.944407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.969 [2024-12-08 06:27:06.944416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.969 [2024-12-08 06:27:06.945115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1117588 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=94b0f538-6246-4545-a370-7a0bbc1c6a7b 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=c787d0af-b7dd-4c13-a968-d7abef37f2b1 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=ca0fbb33-dc50-4b23-a0e3-5207602b12e3 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:17.227 null0 00:23:17.227 null1 00:23:17.227 null2 00:23:17.227 [2024-12-08 06:27:07.156812] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.227 [2024-12-08 06:27:07.173825] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:23:17.227 [2024-12-08 06:27:07.173903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117588 ] 00:23:17.227 [2024-12-08 06:27:07.181053] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1117588 /var/tmp/tgt2.sock 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1117588 ']' 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:17.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
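Two targets are now up: the nvmf_tgt started earlier inside the namespace (listening on 10.0.0.2:4420) and a second spdk_tgt pinned to core mask 2 and driven over /var/tmp/tgt2.sock, plus three freshly generated namespace UUIDs. The rpc_cmd bodies are not expanded in the trace; the sketch below shows the kind of rpc.py sequence that yields a subsystem whose null-bdev namespaces carry explicit UUIDs and a listener on 10.0.0.1:4421 (the method names are standard SPDK RPCs, but treating this as the exact expansion here is an assumption):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/tgt2.sock
    "$rpc" -s "$sock" nvmf_create_transport -t tcp
    "$rpc" -s "$sock" nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    "$rpc" -s "$sock" bdev_null_create null0 64 512      # size/block are assumed
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -u "$ns1uuid"
    "$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 \
        -t tcp -a 10.0.0.1 -s 4421

The point of the test is the NSID/NGUID contract: after 'nvme connect' to cnode2, each /dev/nvme0nX must report an NGUID equal to its UUID with the dashes stripped, which is what the id-ns/jq checks below assert:

    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ "${nguid,,}" == "$(tr -d - <<< "$ns1uuid")" ]] && echo "nsid 1 OK"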
00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.227 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:17.227 [2024-12-08 06:27:07.242790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.227 [2024-12-08 06:27:07.299766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.487 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.487 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:17.487 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:18.053 [2024-12-08 06:27:07.994756] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.053 [2024-12-08 06:27:08.010992] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:18.053 nvme0n1 nvme0n2 00:23:18.053 nvme1n1 00:23:18.053 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:18.053 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:18.053 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:18.620 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:18.620 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:18.620 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:18.620 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:18.620 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:18.620 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:18.620 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:18.620 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:18.620 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:18.620 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:18.620 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:18.620 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:18.620 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:19.554 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:19.554 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:19.554 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:19.554 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:19.554 06:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:19.554 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 94b0f538-6246-4545-a370-7a0bbc1c6a7b 00:23:19.554 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:19.554 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:19.554 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:19.554 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:19.554 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=94b0f53862464545a3707a0bbc1c6a7b 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 94B0F53862464545A3707A0BBC1C6A7B 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 94B0F53862464545A3707A0BBC1C6A7B == \9\4\B\0\F\5\3\8\6\2\4\6\4\5\4\5\A\3\7\0\7\A\0\B\B\C\1\C\6\A\7\B ]] 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid c787d0af-b7dd-4c13-a968-d7abef37f2b1 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c787d0afb7dd4c13a968d7abef37f2b1 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C787D0AFB7DD4C13A968D7ABEF37F2B1 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ C787D0AFB7DD4C13A968D7ABEF37F2B1 == \C\7\8\7\D\0\A\F\B\7\D\D\4\C\1\3\A\9\6\8\D\7\A\B\E\F\3\7\F\2\B\1 ]] 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:19.812 06:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid ca0fbb33-dc50-4b23-a0e3-5207602b12e3 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ca0fbb33dc504b23a0e35207602b12e3 00:23:19.812 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CA0FBB33DC504B23A0E35207602B12E3 00:23:19.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ CA0FBB33DC504B23A0E35207602B12E3 == \C\A\0\F\B\B\3\3\D\C\5\0\4\B\2\3\A\0\E\3\5\2\0\7\6\0\2\B\1\2\E\3 ]] 00:23:19.813 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:20.070 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:20.070 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:20.070 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1117588 00:23:20.070 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1117588 ']' 00:23:20.070 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1117588 00:23:20.070 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:20.070 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.071 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1117588 00:23:20.071 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:20.071 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:20.071 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1117588' 00:23:20.071 killing process with pid 1117588 00:23:20.071 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1117588 00:23:20.071 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1117588 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.329 rmmod nvme_tcp 00:23:20.329 rmmod nvme_fabrics 00:23:20.329 rmmod nvme_keyring 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1117516 ']' 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1117516 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1117516 ']' 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1117516 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:20.329 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1117516 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1117516' 00:23:20.589 killing process with pid 1117516 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1117516 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1117516 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.589 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.130 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:23.130 00:23:23.130 real 0m8.462s 00:23:23.130 user 0m8.320s 
00:23:23.130 sys 0m2.706s 00:23:23.130 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.130 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.130 ************************************ 00:23:23.130 END TEST nvmf_nsid 00:23:23.130 ************************************ 00:23:23.130 06:27:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:23.130 00:23:23.130 real 11m40.376s 00:23:23.130 user 27m29.739s 00:23:23.130 sys 2m55.439s 00:23:23.130 06:27:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.130 06:27:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:23.130 ************************************ 00:23:23.130 END TEST nvmf_target_extra 00:23:23.130 ************************************ 00:23:23.130 06:27:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:23.130 06:27:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.130 06:27:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.130 06:27:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:23.130 ************************************ 00:23:23.130 START TEST nvmf_host 00:23:23.130 ************************************ 00:23:23.130 06:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:23.130 * Looking for test storage... 00:23:23.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:23.130 06:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:23.130 06:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:23.130 06:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:23.130 06:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:23.130 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.130 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.130 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.130 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.130 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.130 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:23.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.131 --rc genhtml_branch_coverage=1 00:23:23.131 --rc genhtml_function_coverage=1 00:23:23.131 --rc genhtml_legend=1 00:23:23.131 --rc geninfo_all_blocks=1 00:23:23.131 --rc geninfo_unexecuted_blocks=1 00:23:23.131 00:23:23.131 ' 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:23.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.131 --rc genhtml_branch_coverage=1 00:23:23.131 --rc genhtml_function_coverage=1 00:23:23.131 --rc genhtml_legend=1 00:23:23.131 --rc geninfo_all_blocks=1 00:23:23.131 --rc geninfo_unexecuted_blocks=1 00:23:23.131 00:23:23.131 ' 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:23.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.131 --rc genhtml_branch_coverage=1 00:23:23.131 --rc genhtml_function_coverage=1 00:23:23.131 --rc genhtml_legend=1 00:23:23.131 --rc geninfo_all_blocks=1 00:23:23.131 --rc geninfo_unexecuted_blocks=1 00:23:23.131 00:23:23.131 ' 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:23.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.131 --rc genhtml_branch_coverage=1 00:23:23.131 --rc genhtml_function_coverage=1 00:23:23.131 --rc genhtml_legend=1 00:23:23.131 --rc geninfo_all_blocks=1 00:23:23.131 --rc geninfo_unexecuted_blocks=1 00:23:23.131 00:23:23.131 ' 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
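The lt/cmp_versions trace that precedes each run_test is the harness deciding whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' into arrays and compared field by field, so 'lt 1.15 2' resolves at the first field. A compact re-implementation of the same idea (numeric fields only; not the library's exact code):

    # usage: version_lt 1.15 2   -> returns 0 (true) when first < second
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                  # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x: use the 1.x option set"

That single check is why the long '--rc lcov_branch_coverage=1 ...' banners repeat before every test: the 1.x-style LCOV_OPTS are re-exported on each invocation.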
00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.131 06:27:12 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.132 06:27:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.132 ************************************ 00:23:23.132 START TEST nvmf_multicontroller 00:23:23.132 ************************************ 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:23.132 * Looking for test storage... 
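The '[: : integer expression expected' diagnostic from test/nvmf/common.sh line 33, seen again above, is a genuine bash complaint rather than test output: an empty variable reaches the numeric test at nvmf/common.sh@33 and '[' cannot parse '' as an integer, so the comparison simply fails and the run continues. The defensive spelling defaults the flag before comparing (the variable name below is a placeholder, since the trace does not show which flag arrives empty):

    SOME_TEST_FLAG=""                           # arrives unset/empty, as in the log
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then   # ':-0' also substitutes for empty
        echo "feature enabled"
    fi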
00:23:23.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:23.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.132 --rc genhtml_branch_coverage=1 00:23:23.132 --rc genhtml_function_coverage=1 00:23:23.132 --rc genhtml_legend=1 00:23:23.132 --rc geninfo_all_blocks=1 00:23:23.132 --rc geninfo_unexecuted_blocks=1 00:23:23.132 00:23:23.132 ' 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:23.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.132 --rc genhtml_branch_coverage=1 00:23:23.132 --rc genhtml_function_coverage=1 00:23:23.132 --rc genhtml_legend=1 00:23:23.132 --rc geninfo_all_blocks=1 00:23:23.132 --rc geninfo_unexecuted_blocks=1 00:23:23.132 00:23:23.132 ' 00:23:23.132 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:23.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.132 --rc genhtml_branch_coverage=1 00:23:23.132 --rc genhtml_function_coverage=1 00:23:23.132 --rc genhtml_legend=1 00:23:23.132 --rc geninfo_all_blocks=1 00:23:23.132 --rc geninfo_unexecuted_blocks=1 00:23:23.132 00:23:23.132 ' 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:23.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.133 --rc genhtml_branch_coverage=1 00:23:23.133 --rc genhtml_function_coverage=1 00:23:23.133 --rc genhtml_legend=1 00:23:23.133 --rc geninfo_all_blocks=1 00:23:23.133 --rc geninfo_unexecuted_blocks=1 00:23:23.133 00:23:23.133 ' 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:23.133 06:27:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:23.133 06:27:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.133 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.134 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:23.134 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:23.134 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.134 06:27:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.669 
06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:25.669 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:25.669 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.669 06:27:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:25.669 Found net devices under 0000:84:00.0: cvl_0_0 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:25.669 Found net devices under 0000:84:00.1: cvl_0_1 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
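[annotation] A minimal sketch of what the nvmf_tcp_init step below does with the two cvl_0_* ports discovered above: the target-side port is isolated in a network namespace so that initiator and target can exchange real TCP traffic on a single machine. The commands mirror the trace that follows; the interface names and addresses are the ones this run detected and are not portable.

    # sketch only -- mirrors the nvmf_tcp_init trace below
    ip netns add cvl_0_0_ns_spdk                  # namespace for the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                            # verify host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and namespace -> host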
00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:23:25.669 00:23:25.669 --- 10.0.0.2 ping statistics --- 00:23:25.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.669 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:23:25.669 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:23:25.669 00:23:25.669 --- 10.0.0.1 ping statistics --- 00:23:25.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.670 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1120627 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1120627 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1120627 ']' 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.670 [2024-12-08 06:27:15.402206] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:23:25.670 [2024-12-08 06:27:15.402275] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.670 [2024-12-08 06:27:15.473417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:25.670 [2024-12-08 06:27:15.533189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.670 [2024-12-08 06:27:15.533257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.670 [2024-12-08 06:27:15.533270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.670 [2024-12-08 06:27:15.533281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.670 [2024-12-08 06:27:15.533291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.670 [2024-12-08 06:27:15.534991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.670 [2024-12-08 06:27:15.535052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.670 [2024-12-08 06:27:15.535055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.670 [2024-12-08 06:27:15.692213] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.670 Malloc0 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.670 [2024-12-08 06:27:15.753103] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.670 [2024-12-08 06:27:15.760914] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.670 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.927 Malloc1 00:23:25.927 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.927 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:25.927 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1120650 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1120650 /var/tmp/bdevperf.sock 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1120650 ']' 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
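[annotation] By this point the target side is fully configured: one TCP transport, two 64 MiB malloc bdevs, and two subsystems (cnode1 and cnode2) each listening on ports 4420 and 4421 of 10.0.0.2, with bdevperf started as a second SPDK process driven over /var/tmp/bdevperf.sock. A sketch of the same target configuration as direct rpc.py calls (the rpc.py invocation is illustrative; the test actually drives these through its rpc_cmd wrapper):

    # sketch of the target-side configuration replayed in the trace above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 is created the same way around Malloc1, with the same two listeners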
00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.928 06:27:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.185 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.185 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:26.185 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:26.185 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.185 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.185 NVMe0n1 00:23:26.185 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.185 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.185 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:26.185 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.185 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.185 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.185 1 00:23:26.185 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.186 request: 00:23:26.186 { 00:23:26.186 "name": "NVMe0", 00:23:26.186 "trtype": "tcp", 00:23:26.186 "traddr": "10.0.0.2", 00:23:26.186 "adrfam": "ipv4", 00:23:26.186 "trsvcid": "4420", 00:23:26.186 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:26.186 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:26.186 "hostaddr": "10.0.0.1", 00:23:26.186 "prchk_reftag": false, 00:23:26.186 "prchk_guard": false, 00:23:26.186 "hdgst": false, 00:23:26.186 "ddgst": false, 00:23:26.186 "allow_unrecognized_csi": false, 00:23:26.186 "method": "bdev_nvme_attach_controller", 00:23:26.186 "req_id": 1 00:23:26.186 } 00:23:26.186 Got JSON-RPC error response 00:23:26.186 response: 00:23:26.186 { 00:23:26.186 "code": -114, 00:23:26.186 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.186 } 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.186 request: 00:23:26.186 { 00:23:26.186 "name": "NVMe0", 00:23:26.186 "trtype": "tcp", 00:23:26.186 "traddr": "10.0.0.2", 00:23:26.186 "adrfam": "ipv4", 00:23:26.186 "trsvcid": "4420", 00:23:26.186 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.186 "hostaddr": "10.0.0.1", 00:23:26.186 "prchk_reftag": false, 00:23:26.186 "prchk_guard": false, 00:23:26.186 "hdgst": false, 00:23:26.186 "ddgst": false, 00:23:26.186 "allow_unrecognized_csi": false, 00:23:26.186 "method": "bdev_nvme_attach_controller", 00:23:26.186 "req_id": 1 00:23:26.186 } 00:23:26.186 Got JSON-RPC error response 00:23:26.186 response: 00:23:26.186 { 00:23:26.186 "code": -114, 00:23:26.186 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.186 } 00:23:26.186 06:27:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.186 request: 00:23:26.186 { 00:23:26.186 "name": "NVMe0", 00:23:26.186 "trtype": "tcp", 00:23:26.186 "traddr": "10.0.0.2", 00:23:26.186 "adrfam": "ipv4", 00:23:26.186 "trsvcid": "4420", 00:23:26.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.186 "hostaddr": "10.0.0.1", 00:23:26.186 "prchk_reftag": false, 00:23:26.186 "prchk_guard": false, 00:23:26.186 "hdgst": false, 00:23:26.186 "ddgst": false, 00:23:26.186 "multipath": "disable", 00:23:26.186 "allow_unrecognized_csi": false, 00:23:26.186 "method": "bdev_nvme_attach_controller", 00:23:26.186 "req_id": 1 00:23:26.186 } 00:23:26.186 Got JSON-RPC error response 00:23:26.186 response: 00:23:26.186 { 00:23:26.186 "code": -114, 00:23:26.186 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:26.186 } 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.186 06:27:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.186 request: 00:23:26.186 { 00:23:26.186 "name": "NVMe0", 00:23:26.186 "trtype": "tcp", 00:23:26.186 "traddr": "10.0.0.2", 00:23:26.186 "adrfam": "ipv4", 00:23:26.186 "trsvcid": "4420", 00:23:26.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.186 "hostaddr": "10.0.0.1", 00:23:26.186 "prchk_reftag": false, 00:23:26.186 "prchk_guard": false, 00:23:26.186 "hdgst": false, 00:23:26.186 "ddgst": false, 00:23:26.186 "multipath": "failover", 00:23:26.186 "allow_unrecognized_csi": false, 00:23:26.186 "method": "bdev_nvme_attach_controller", 00:23:26.186 "req_id": 1 00:23:26.186 } 00:23:26.186 Got JSON-RPC error response 00:23:26.186 response: 00:23:26.186 { 00:23:26.186 "code": -114, 00:23:26.186 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.186 } 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.186 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.187 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.187 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.187 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.187 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.187 NVMe0n1 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
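[annotation] The four rejected attach attempts above all reuse the controller name NVMe0 against the same listener (port 4420): a different hostnqn, a different subsystem (cnode2), multipath=disable, and multipath=failover each come back with JSON-RPC error -114. Only the final call, which points NVMe0 at the other listener, succeeds and adds a genuine second path. A sketch of that succeeding call against the bdevperf RPC socket:

    # second path for the existing controller -- same subsystem, other listener
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The perform_tests run that follows reports 18673.53 write IOPS at a 4096-byte I/O size, which works out to 18673.53 x 4096 B = 76.49 MB/s, i.e. the 72.94 MiB/s shown in the results.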
00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.444 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:26.444 06:27:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:27.826 { 00:23:27.826 "results": [ 00:23:27.826 { 00:23:27.826 "job": "NVMe0n1", 00:23:27.826 "core_mask": "0x1", 00:23:27.826 "workload": "write", 00:23:27.826 "status": "finished", 00:23:27.826 "queue_depth": 128, 00:23:27.826 "io_size": 4096, 00:23:27.826 "runtime": 1.006826, 00:23:27.826 "iops": 18673.534453818236, 00:23:27.826 "mibps": 72.94349396022749, 00:23:27.826 "io_failed": 0, 00:23:27.826 "io_timeout": 0, 00:23:27.826 "avg_latency_us": 6843.649154359401, 00:23:27.826 "min_latency_us": 2961.256296296296, 00:23:27.826 "max_latency_us": 12621.748148148148 00:23:27.826 } 00:23:27.826 ], 00:23:27.826 "core_count": 1 00:23:27.826 } 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1120650 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1120650 ']' 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1120650 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1120650 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1120650' 00:23:27.826 killing process with pid 1120650 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1120650 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1120650 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:27.826 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:27.826 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:27.826 [2024-12-08 06:27:15.868153] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:23:27.827 [2024-12-08 06:27:15.868236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120650 ] 00:23:27.827 [2024-12-08 06:27:15.937569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.827 [2024-12-08 06:27:15.996228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.827 [2024-12-08 06:27:16.422559] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name b65977fe-651a-4127-8f35-6534e7c7c934 already exists 00:23:27.827 [2024-12-08 06:27:16.422598] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:b65977fe-651a-4127-8f35-6534e7c7c934 alias for bdev NVMe1n1 00:23:27.827 [2024-12-08 06:27:16.422613] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:27.827 Running I/O for 1 seconds... 00:23:27.827 18673.00 IOPS, 72.94 MiB/s 00:23:27.827 Latency(us) 00:23:27.827 [2024-12-08T05:27:17.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.827 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:27.827 NVMe0n1 : 1.01 18673.53 72.94 0.00 0.00 6843.65 2961.26 12621.75 00:23:27.827 [2024-12-08T05:27:17.946Z] =================================================================================================================== 00:23:27.827 [2024-12-08T05:27:17.946Z] Total : 18673.53 72.94 0.00 0.00 6843.65 2961.26 12621.75 00:23:27.827 Received shutdown signal, test time was about 1.000000 seconds 00:23:27.827 00:23:27.827 Latency(us) 00:23:27.827 [2024-12-08T05:27:17.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.827 [2024-12-08T05:27:17.946Z] =================================================================================================================== 00:23:27.827 [2024-12-08T05:27:17.946Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.827 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:27.827 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:27.827 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:27.827 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:27.827 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:27.827 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:27.827 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:27.827 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:27.827 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:27.827 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:27.827 rmmod nvme_tcp 00:23:27.827 rmmod nvme_fabrics 00:23:27.827 rmmod nvme_keyring 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:28.084 
06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1120627 ']' 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1120627 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1120627 ']' 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1120627 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1120627 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1120627' 00:23:28.084 killing process with pid 1120627 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1120627 00:23:28.084 06:27:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1120627 00:23:28.344 06:27:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.344 06:27:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.344 06:27:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.344 06:27:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:28.344 06:27:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:28.344 06:27:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.344 06:27:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.344 06:27:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.344 06:27:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:28.344 06:27:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.344 06:27:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.344 06:27:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.248 06:27:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:30.248 00:23:30.248 real 0m7.297s 00:23:30.248 user 0m10.942s 00:23:30.248 sys 0m2.419s 00:23:30.248 06:27:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.248 06:27:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.248 ************************************ 00:23:30.248 END TEST nvmf_multicontroller 00:23:30.248 ************************************ 00:23:30.248 06:27:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:30.248 06:27:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:30.248 06:27:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:30.248 06:27:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.248 ************************************ 00:23:30.248 START TEST nvmf_aer 00:23:30.248 ************************************ 00:23:30.248 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:30.507 * Looking for test storage... 00:23:30.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:30.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.507 --rc genhtml_branch_coverage=1 00:23:30.507 --rc genhtml_function_coverage=1 00:23:30.507 --rc genhtml_legend=1 00:23:30.507 --rc geninfo_all_blocks=1 00:23:30.507 --rc geninfo_unexecuted_blocks=1 00:23:30.507 00:23:30.507 ' 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:30.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.507 --rc genhtml_branch_coverage=1 00:23:30.507 --rc genhtml_function_coverage=1 00:23:30.507 --rc genhtml_legend=1 00:23:30.507 --rc geninfo_all_blocks=1 00:23:30.507 --rc geninfo_unexecuted_blocks=1 00:23:30.507 00:23:30.507 ' 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:30.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.507 --rc genhtml_branch_coverage=1 00:23:30.507 --rc genhtml_function_coverage=1 00:23:30.507 --rc genhtml_legend=1 00:23:30.507 --rc geninfo_all_blocks=1 00:23:30.507 --rc geninfo_unexecuted_blocks=1 00:23:30.507 00:23:30.507 ' 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:30.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.507 --rc genhtml_branch_coverage=1 00:23:30.507 --rc genhtml_function_coverage=1 00:23:30.507 --rc genhtml_legend=1 00:23:30.507 --rc geninfo_all_blocks=1 00:23:30.507 --rc geninfo_unexecuted_blocks=1 00:23:30.507 00:23:30.507 ' 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.507 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:30.508 06:27:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:33.039 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:33.039 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:33.039 Found net devices under 0000:84:00.0: cvl_0_0 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:33.039 06:27:22 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:33.039 Found net devices under 0000:84:00.1: cvl_0_1 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.039 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:33.040 
06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:33.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:23:33.040 00:23:33.040 --- 10.0.0.2 ping statistics --- 00:23:33.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.040 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:23:33.040 00:23:33.040 --- 10.0.0.1 ping statistics --- 00:23:33.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.040 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1122881 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1122881 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1122881 ']' 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.040 06:27:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.040 [2024-12-08 06:27:22.787072] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:23:33.040 [2024-12-08 06:27:22.787169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.040 [2024-12-08 06:27:22.861628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:33.040 [2024-12-08 06:27:22.922232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.040 [2024-12-08 06:27:22.922316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.040 [2024-12-08 06:27:22.922329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.040 [2024-12-08 06:27:22.922340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.040 [2024-12-08 06:27:22.922349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.040 [2024-12-08 06:27:22.924142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.040 [2024-12-08 06:27:22.924205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.040 [2024-12-08 06:27:22.924273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.040 [2024-12-08 06:27:22.924276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.040 [2024-12-08 06:27:23.079779] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.040 Malloc0 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.040 [2024-12-08 06:27:23.139358] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.040 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.040 [ 00:23:33.040 { 00:23:33.040 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:33.040 "subtype": "Discovery", 00:23:33.040 "listen_addresses": [], 00:23:33.040 "allow_any_host": true, 00:23:33.040 "hosts": [] 00:23:33.040 }, 00:23:33.040 { 00:23:33.040 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.040 "subtype": "NVMe", 00:23:33.040 "listen_addresses": [ 00:23:33.040 { 00:23:33.040 "trtype": "TCP", 00:23:33.040 "adrfam": "IPv4", 00:23:33.040 "traddr": "10.0.0.2", 00:23:33.040 "trsvcid": "4420" 00:23:33.040 } 00:23:33.040 ], 00:23:33.040 "allow_any_host": true, 00:23:33.040 "hosts": [], 00:23:33.040 "serial_number": "SPDK00000000000001", 00:23:33.040 "model_number": "SPDK bdev Controller", 00:23:33.041 "max_namespaces": 2, 00:23:33.041 "min_cntlid": 1, 00:23:33.041 "max_cntlid": 65519, 00:23:33.041 "namespaces": [ 00:23:33.041 { 00:23:33.041 "nsid": 1, 00:23:33.041 "bdev_name": "Malloc0", 00:23:33.041 "name": "Malloc0", 00:23:33.041 "nguid": "847642AF959F4C4AAAF3DA8A38DBAE35", 00:23:33.041 "uuid": "847642af-959f-4c4a-aaf3-da8a38dbae35" 00:23:33.041 } 00:23:33.041 ] 00:23:33.041 } 00:23:33.041 ] 00:23:33.041 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.041 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:33.041 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:33.041 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1122994 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.299 Malloc1 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.299 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.558 [ 00:23:33.558 { 00:23:33.558 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:33.558 "subtype": "Discovery", 00:23:33.558 "listen_addresses": [], 00:23:33.558 "allow_any_host": true, 00:23:33.558 "hosts": [] 00:23:33.558 }, 00:23:33.558 { 00:23:33.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.558 "subtype": "NVMe", 00:23:33.558 "listen_addresses": [ 00:23:33.558 { 00:23:33.558 "trtype": "TCP", 00:23:33.558 "adrfam": "IPv4", 00:23:33.558 "traddr": "10.0.0.2", 00:23:33.558 "trsvcid": "4420" 00:23:33.558 } 00:23:33.558 ], 00:23:33.558 "allow_any_host": true, 00:23:33.558 "hosts": [], 00:23:33.558 "serial_number": "SPDK00000000000001", 00:23:33.558 "model_number": "SPDK bdev Controller", 00:23:33.558 "max_namespaces": 2, 00:23:33.558 "min_cntlid": 1, 00:23:33.558 "max_cntlid": 65519, 00:23:33.558 "namespaces": [ 00:23:33.558 { 00:23:33.558 "nsid": 1, 00:23:33.558 "bdev_name": "Malloc0", 00:23:33.558 "name": "Malloc0", 00:23:33.558 "nguid": "847642AF959F4C4AAAF3DA8A38DBAE35", 00:23:33.558 "uuid": "847642af-959f-4c4a-aaf3-da8a38dbae35" 00:23:33.558 }, 00:23:33.558 { 00:23:33.558 "nsid": 2, 00:23:33.558 "bdev_name": "Malloc1", 00:23:33.558 "name": "Malloc1", 00:23:33.558 "nguid": "F7C44F53C1004A3A82A345015C6250DC", 00:23:33.558 "uuid": 
"f7c44f53-c100-4a3a-82a3-45015c6250dc" 00:23:33.558 } 00:23:33.558 ] 00:23:33.558 } 00:23:33.558 ] 00:23:33.558 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.558 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1122994 00:23:33.558 Asynchronous Event Request test 00:23:33.558 Attaching to 10.0.0.2 00:23:33.558 Attached to 10.0.0.2 00:23:33.558 Registering asynchronous event callbacks... 00:23:33.558 Starting namespace attribute notice tests for all controllers... 00:23:33.558 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:33.558 aer_cb - Changed Namespace 00:23:33.558 Cleaning up... 00:23:33.558 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:33.558 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.559 rmmod nvme_tcp 00:23:33.559 rmmod nvme_fabrics 00:23:33.559 rmmod nvme_keyring 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1122881 ']' 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1122881 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1122881 ']' 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1122881 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:33.559 06:27:23 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1122881 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1122881' 00:23:33.559 killing process with pid 1122881 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1122881 00:23:33.559 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1122881 00:23:33.824 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:33.824 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:33.824 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:33.824 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:33.824 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:33.824 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:33.824 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:33.824 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.824 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:33.824 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.824 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.824 06:27:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.781 06:27:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:35.781 00:23:35.781 real 0m5.470s 00:23:35.781 user 0m4.235s 00:23:35.781 sys 0m2.002s 00:23:35.781 06:27:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:35.781 06:27:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.782 ************************************ 00:23:35.782 END TEST nvmf_aer 00:23:35.782 ************************************ 00:23:35.782 06:27:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:35.782 06:27:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:35.782 06:27:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:35.782 06:27:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.782 ************************************ 00:23:35.782 START TEST nvmf_async_init 00:23:35.782 ************************************ 00:23:35.782 06:27:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:36.041 * Looking for test storage... 
00:23:36.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:36.041 06:27:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:36.041 06:27:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:36.041 06:27:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:36.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.041 --rc genhtml_branch_coverage=1 00:23:36.041 --rc genhtml_function_coverage=1 00:23:36.041 --rc genhtml_legend=1 00:23:36.041 --rc geninfo_all_blocks=1 00:23:36.041 --rc geninfo_unexecuted_blocks=1 00:23:36.041 00:23:36.041 ' 00:23:36.041 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:36.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.042 --rc genhtml_branch_coverage=1 00:23:36.042 --rc genhtml_function_coverage=1 00:23:36.042 --rc genhtml_legend=1 00:23:36.042 --rc geninfo_all_blocks=1 00:23:36.042 --rc geninfo_unexecuted_blocks=1 00:23:36.042 00:23:36.042 ' 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:36.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.042 --rc genhtml_branch_coverage=1 00:23:36.042 --rc genhtml_function_coverage=1 00:23:36.042 --rc genhtml_legend=1 00:23:36.042 --rc geninfo_all_blocks=1 00:23:36.042 --rc geninfo_unexecuted_blocks=1 00:23:36.042 00:23:36.042 ' 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:36.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.042 --rc genhtml_branch_coverage=1 00:23:36.042 --rc genhtml_function_coverage=1 00:23:36.042 --rc genhtml_legend=1 00:23:36.042 --rc geninfo_all_blocks=1 00:23:36.042 --rc geninfo_unexecuted_blocks=1 00:23:36.042 00:23:36.042 ' 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.042 06:27:26 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:36.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:36.042 06:27:26 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ef3915b1b8b24430af634be086083d5a 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:36.042 06:27:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:38.575 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:38.576 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:38.576 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:38.576 Found net devices under 0000:84:00.0: cvl_0_0 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:38.576 Found net devices under 0000:84:00.1: cvl_0_1 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.576 06:27:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:38.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:23:38.576 00:23:38.576 --- 10.0.0.2 ping statistics --- 00:23:38.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.576 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:38.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:23:38.576 00:23:38.576 --- 10.0.0.1 ping statistics --- 00:23:38.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.576 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1124993 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1124993 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1124993 ']' 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.576 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.576 [2024-12-08 06:27:28.414888] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
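Before nvmfappstart above, nvmf_tcp_init laid down a compact isolation pattern: one port of the NIC pair is moved into a private network namespace to play the target side, both ends are addressed, the NVMe/TCP port is opened, and reachability is verified in both directions before the target app is started inside the namespace. A minimal sketch of that pattern, using the interface names and addresses recorded in this log (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2; the tagged iptables comment is what teardown greps for later):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1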
00:23:38.576 [2024-12-08 06:27:28.414979] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.576 [2024-12-08 06:27:28.485599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.576 [2024-12-08 06:27:28.538414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.577 [2024-12-08 06:27:28.538479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.577 [2024-12-08 06:27:28.538501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.577 [2024-12-08 06:27:28.538511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.577 [2024-12-08 06:27:28.538521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.577 [2024-12-08 06:27:28.539340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.577 [2024-12-08 06:27:28.678975] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.577 null0 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.577 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ef3915b1b8b24430af634be086083d5a 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.835 [2024-12-08 06:27:28.719272] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.835 nvme0n1 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.835 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.093 [ 00:23:39.093 { 00:23:39.093 "name": "nvme0n1", 00:23:39.093 "aliases": [ 00:23:39.093 "ef3915b1-b8b2-4430-af63-4be086083d5a" 00:23:39.093 ], 00:23:39.093 "product_name": "NVMe disk", 00:23:39.093 "block_size": 512, 00:23:39.093 "num_blocks": 2097152, 00:23:39.093 "uuid": "ef3915b1-b8b2-4430-af63-4be086083d5a", 00:23:39.093 "numa_id": 1, 00:23:39.093 "assigned_rate_limits": { 00:23:39.093 "rw_ios_per_sec": 0, 00:23:39.093 "rw_mbytes_per_sec": 0, 00:23:39.093 "r_mbytes_per_sec": 0, 00:23:39.093 "w_mbytes_per_sec": 0 00:23:39.093 }, 00:23:39.093 "claimed": false, 00:23:39.093 "zoned": false, 00:23:39.093 "supported_io_types": { 00:23:39.093 "read": true, 00:23:39.093 "write": true, 00:23:39.093 "unmap": false, 00:23:39.093 "flush": true, 00:23:39.093 "reset": true, 00:23:39.093 "nvme_admin": true, 00:23:39.093 "nvme_io": true, 00:23:39.093 "nvme_io_md": false, 00:23:39.093 "write_zeroes": true, 00:23:39.093 "zcopy": false, 00:23:39.093 "get_zone_info": false, 00:23:39.093 "zone_management": false, 00:23:39.093 "zone_append": false, 00:23:39.093 "compare": true, 00:23:39.093 "compare_and_write": true, 00:23:39.093 "abort": true, 00:23:39.093 "seek_hole": false, 00:23:39.093 "seek_data": false, 00:23:39.093 "copy": true, 00:23:39.093 "nvme_iov_md": false 00:23:39.093 }, 00:23:39.093 
"memory_domains": [ 00:23:39.093 { 00:23:39.093 "dma_device_id": "system", 00:23:39.093 "dma_device_type": 1 00:23:39.093 } 00:23:39.093 ], 00:23:39.093 "driver_specific": { 00:23:39.093 "nvme": [ 00:23:39.093 { 00:23:39.093 "trid": { 00:23:39.093 "trtype": "TCP", 00:23:39.093 "adrfam": "IPv4", 00:23:39.093 "traddr": "10.0.0.2", 00:23:39.093 "trsvcid": "4420", 00:23:39.093 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:39.093 }, 00:23:39.093 "ctrlr_data": { 00:23:39.093 "cntlid": 1, 00:23:39.093 "vendor_id": "0x8086", 00:23:39.093 "model_number": "SPDK bdev Controller", 00:23:39.093 "serial_number": "00000000000000000000", 00:23:39.093 "firmware_revision": "25.01", 00:23:39.093 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:39.093 "oacs": { 00:23:39.093 "security": 0, 00:23:39.093 "format": 0, 00:23:39.093 "firmware": 0, 00:23:39.093 "ns_manage": 0 00:23:39.093 }, 00:23:39.093 "multi_ctrlr": true, 00:23:39.093 "ana_reporting": false 00:23:39.093 }, 00:23:39.093 "vs": { 00:23:39.093 "nvme_version": "1.3" 00:23:39.093 }, 00:23:39.093 "ns_data": { 00:23:39.093 "id": 1, 00:23:39.093 "can_share": true 00:23:39.093 } 00:23:39.093 } 00:23:39.093 ], 00:23:39.093 "mp_policy": "active_passive" 00:23:39.093 } 00:23:39.093 } 00:23:39.093 ] 00:23:39.093 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.093 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:39.093 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.093 06:27:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.093 [2024-12-08 06:27:28.968784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:39.093 [2024-12-08 06:27:28.968873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d1e40 (9): Bad file descriptor 00:23:39.093 [2024-12-08 06:27:29.100862] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:39.093 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.093 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:39.093 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.093 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.093 [ 00:23:39.093 { 00:23:39.093 "name": "nvme0n1", 00:23:39.093 "aliases": [ 00:23:39.093 "ef3915b1-b8b2-4430-af63-4be086083d5a" 00:23:39.093 ], 00:23:39.093 "product_name": "NVMe disk", 00:23:39.093 "block_size": 512, 00:23:39.093 "num_blocks": 2097152, 00:23:39.093 "uuid": "ef3915b1-b8b2-4430-af63-4be086083d5a", 00:23:39.093 "numa_id": 1, 00:23:39.093 "assigned_rate_limits": { 00:23:39.093 "rw_ios_per_sec": 0, 00:23:39.093 "rw_mbytes_per_sec": 0, 00:23:39.093 "r_mbytes_per_sec": 0, 00:23:39.093 "w_mbytes_per_sec": 0 00:23:39.093 }, 00:23:39.093 "claimed": false, 00:23:39.093 "zoned": false, 00:23:39.093 "supported_io_types": { 00:23:39.093 "read": true, 00:23:39.093 "write": true, 00:23:39.093 "unmap": false, 00:23:39.093 "flush": true, 00:23:39.093 "reset": true, 00:23:39.093 "nvme_admin": true, 00:23:39.093 "nvme_io": true, 00:23:39.093 "nvme_io_md": false, 00:23:39.093 "write_zeroes": true, 00:23:39.093 "zcopy": false, 00:23:39.093 "get_zone_info": false, 00:23:39.093 "zone_management": false, 00:23:39.093 "zone_append": false, 00:23:39.093 "compare": true, 00:23:39.093 "compare_and_write": true, 00:23:39.093 "abort": true, 00:23:39.094 "seek_hole": false, 00:23:39.094 "seek_data": false, 00:23:39.094 "copy": true, 00:23:39.094 "nvme_iov_md": false 00:23:39.094 }, 00:23:39.094 "memory_domains": [ 00:23:39.094 { 00:23:39.094 "dma_device_id": "system", 00:23:39.094 "dma_device_type": 1 00:23:39.094 } 00:23:39.094 ], 00:23:39.094 "driver_specific": { 00:23:39.094 "nvme": [ 00:23:39.094 { 00:23:39.094 "trid": { 00:23:39.094 "trtype": "TCP", 00:23:39.094 "adrfam": "IPv4", 00:23:39.094 "traddr": "10.0.0.2", 00:23:39.094 "trsvcid": "4420", 00:23:39.094 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:39.094 }, 00:23:39.094 "ctrlr_data": { 00:23:39.094 "cntlid": 2, 00:23:39.094 "vendor_id": "0x8086", 00:23:39.094 "model_number": "SPDK bdev Controller", 00:23:39.094 "serial_number": "00000000000000000000", 00:23:39.094 "firmware_revision": "25.01", 00:23:39.094 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:39.094 "oacs": { 00:23:39.094 "security": 0, 00:23:39.094 "format": 0, 00:23:39.094 "firmware": 0, 00:23:39.094 "ns_manage": 0 00:23:39.094 }, 00:23:39.094 "multi_ctrlr": true, 00:23:39.094 "ana_reporting": false 00:23:39.094 }, 00:23:39.094 "vs": { 00:23:39.094 "nvme_version": "1.3" 00:23:39.094 }, 00:23:39.094 "ns_data": { 00:23:39.094 "id": 1, 00:23:39.094 "can_share": true 00:23:39.094 } 00:23:39.094 } 00:23:39.094 ], 00:23:39.094 "mp_policy": "active_passive" 00:23:39.094 } 00:23:39.094 } 00:23:39.094 ] 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
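A detail worth noticing between the two descriptor dumps: they are identical except for ctrlr_data.cntlid, which moves from 1 to 2 after bdev_nvme_reset_controller, confirming the reset tore down and re-established the controller association rather than reusing it. A quick scripted check of the same thing (a sketch only; jq is an assumption here, the harness itself does not use it):

    cntlid=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid')
    test "$cntlid" -eq 2 && echo "reset produced a fresh controller association"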
00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.9h2VRVQbl9 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.9h2VRVQbl9 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.9h2VRVQbl9 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.094 [2024-12-08 06:27:29.161396] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:39.094 [2024-12-08 06:27:29.161552] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.094 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.094 [2024-12-08 06:27:29.177428] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.352 nvme0n1 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.352 [ 00:23:39.352 { 00:23:39.352 "name": "nvme0n1", 00:23:39.352 "aliases": [ 00:23:39.352 "ef3915b1-b8b2-4430-af63-4be086083d5a" 00:23:39.352 ], 00:23:39.352 "product_name": "NVMe disk", 00:23:39.352 "block_size": 512, 00:23:39.352 "num_blocks": 2097152, 00:23:39.352 "uuid": "ef3915b1-b8b2-4430-af63-4be086083d5a", 00:23:39.352 "numa_id": 1, 00:23:39.352 "assigned_rate_limits": { 00:23:39.352 "rw_ios_per_sec": 0, 00:23:39.352 "rw_mbytes_per_sec": 0, 00:23:39.352 "r_mbytes_per_sec": 0, 00:23:39.352 "w_mbytes_per_sec": 0 00:23:39.352 }, 00:23:39.352 "claimed": false, 00:23:39.352 "zoned": false, 00:23:39.352 "supported_io_types": { 00:23:39.352 "read": true, 00:23:39.352 "write": true, 00:23:39.352 "unmap": false, 00:23:39.352 "flush": true, 00:23:39.352 "reset": true, 00:23:39.352 "nvme_admin": true, 00:23:39.352 "nvme_io": true, 00:23:39.352 "nvme_io_md": false, 00:23:39.352 "write_zeroes": true, 00:23:39.352 "zcopy": false, 00:23:39.352 "get_zone_info": false, 00:23:39.352 "zone_management": false, 00:23:39.352 "zone_append": false, 00:23:39.352 "compare": true, 00:23:39.352 "compare_and_write": true, 00:23:39.352 "abort": true, 00:23:39.352 "seek_hole": false, 00:23:39.352 "seek_data": false, 00:23:39.352 "copy": true, 00:23:39.352 "nvme_iov_md": false 00:23:39.352 }, 00:23:39.352 "memory_domains": [ 00:23:39.352 { 00:23:39.352 "dma_device_id": "system", 00:23:39.352 "dma_device_type": 1 00:23:39.352 } 00:23:39.352 ], 00:23:39.352 "driver_specific": { 00:23:39.352 "nvme": [ 00:23:39.352 { 00:23:39.352 "trid": { 00:23:39.352 "trtype": "TCP", 00:23:39.352 "adrfam": "IPv4", 00:23:39.352 "traddr": "10.0.0.2", 00:23:39.352 "trsvcid": "4421", 00:23:39.352 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:39.352 }, 00:23:39.352 "ctrlr_data": { 00:23:39.352 "cntlid": 3, 00:23:39.352 "vendor_id": "0x8086", 00:23:39.352 "model_number": "SPDK bdev Controller", 00:23:39.352 "serial_number": "00000000000000000000", 00:23:39.352 "firmware_revision": "25.01", 00:23:39.352 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:39.352 "oacs": { 00:23:39.352 "security": 0, 00:23:39.352 "format": 0, 00:23:39.352 "firmware": 0, 00:23:39.352 "ns_manage": 0 00:23:39.352 }, 00:23:39.352 "multi_ctrlr": true, 00:23:39.352 "ana_reporting": false 00:23:39.352 }, 00:23:39.352 "vs": { 00:23:39.352 "nvme_version": "1.3" 00:23:39.352 }, 00:23:39.352 "ns_data": { 00:23:39.352 "id": 1, 00:23:39.352 "can_share": true 00:23:39.352 } 00:23:39.352 } 00:23:39.352 ], 00:23:39.352 "mp_policy": "active_passive" 00:23:39.352 } 00:23:39.352 } 00:23:39.352 ] 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.9h2VRVQbl9 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
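The TLS leg exercised above condenses to the flow below (flags exactly as logged; the PSK is the well-known interoperability test key, not a secret, and key0/host1 are names the test chooses). Disabling allow_any_host before adding the secure listener is the important ordering: once done, only hosts registered with a PSK can reach port 4421, while the plain listener on 4420 is unaffected:

    KEY_PATH=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"
    scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0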
00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:39.352 rmmod nvme_tcp 00:23:39.352 rmmod nvme_fabrics 00:23:39.352 rmmod nvme_keyring 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1124993 ']' 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1124993 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1124993 ']' 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1124993 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1124993 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1124993' 00:23:39.352 killing process with pid 1124993 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1124993 00:23:39.352 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1124993 00:23:39.612 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:39.612 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:39.612 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:39.612 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:39.612 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:39.612 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:39.612 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:39.612 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:39.612 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:39.612 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
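Teardown below mirrors the setup and relies on the SPDK_NVMF comment tag so that only rules this run installed are stripped: the iptr helper is just iptables-save piped through grep -v SPDK_NVMF into iptables-restore. The gist, in the order logged (namespace removal happens inside _remove_spdk_ns, whose commands are not traced here, so that last step is an assumption):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"            # killprocess: only after verifying the pid still belongs to the target
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1
    # plus deletion of the cvl_0_0_ns_spdk namespace by _remove_spdk_ns (not shown in this trace)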
00:23:39.612 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.612 06:27:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.521 06:27:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:41.521 00:23:41.521 real 0m5.740s 00:23:41.521 user 0m2.254s 00:23:41.521 sys 0m1.917s 00:23:41.521 06:27:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:41.521 06:27:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:41.521 ************************************ 00:23:41.521 END TEST nvmf_async_init 00:23:41.521 ************************************ 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.780 ************************************ 00:23:41.780 START TEST dma 00:23:41.780 ************************************ 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:41.780 * Looking for test storage... 00:23:41.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:41.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.780 --rc genhtml_branch_coverage=1 00:23:41.780 --rc genhtml_function_coverage=1 00:23:41.780 --rc genhtml_legend=1 00:23:41.780 --rc geninfo_all_blocks=1 00:23:41.780 --rc geninfo_unexecuted_blocks=1 00:23:41.780 00:23:41.780 ' 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:41.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.780 --rc genhtml_branch_coverage=1 00:23:41.780 --rc genhtml_function_coverage=1 00:23:41.780 --rc genhtml_legend=1 00:23:41.780 --rc geninfo_all_blocks=1 00:23:41.780 --rc geninfo_unexecuted_blocks=1 00:23:41.780 00:23:41.780 ' 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:41.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.780 --rc genhtml_branch_coverage=1 00:23:41.780 --rc genhtml_function_coverage=1 00:23:41.780 --rc genhtml_legend=1 00:23:41.780 --rc geninfo_all_blocks=1 00:23:41.780 --rc geninfo_unexecuted_blocks=1 00:23:41.780 00:23:41.780 ' 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:41.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.780 --rc genhtml_branch_coverage=1 00:23:41.780 --rc genhtml_function_coverage=1 00:23:41.780 --rc genhtml_legend=1 00:23:41.780 --rc geninfo_all_blocks=1 00:23:41.780 --rc geninfo_unexecuted_blocks=1 00:23:41.780 00:23:41.780 ' 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.780 
06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.780 06:27:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:41.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:41.781 00:23:41.781 real 0m0.173s 00:23:41.781 user 0m0.116s 00:23:41.781 sys 0m0.067s 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:41.781 ************************************ 00:23:41.781 END TEST dma 00:23:41.781 ************************************ 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.781 ************************************ 00:23:41.781 START TEST nvmf_identify 00:23:41.781 
************************************ 00:23:41.781 06:27:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:42.040 * Looking for test storage... 00:23:42.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:42.040 06:27:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:42.040 06:27:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:42.040 06:27:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:42.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.040 --rc genhtml_branch_coverage=1 00:23:42.040 --rc genhtml_function_coverage=1 00:23:42.040 --rc genhtml_legend=1 00:23:42.040 --rc geninfo_all_blocks=1 00:23:42.040 --rc geninfo_unexecuted_blocks=1 00:23:42.040 00:23:42.040 ' 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:42.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.040 --rc genhtml_branch_coverage=1 00:23:42.040 --rc genhtml_function_coverage=1 00:23:42.040 --rc genhtml_legend=1 00:23:42.040 --rc geninfo_all_blocks=1 00:23:42.040 --rc geninfo_unexecuted_blocks=1 00:23:42.040 00:23:42.040 ' 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:42.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.040 --rc genhtml_branch_coverage=1 00:23:42.040 --rc genhtml_function_coverage=1 00:23:42.040 --rc genhtml_legend=1 00:23:42.040 --rc geninfo_all_blocks=1 00:23:42.040 --rc geninfo_unexecuted_blocks=1 00:23:42.040 00:23:42.040 ' 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:42.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.040 --rc genhtml_branch_coverage=1 00:23:42.040 --rc genhtml_function_coverage=1 00:23:42.040 --rc genhtml_legend=1 00:23:42.040 --rc geninfo_all_blocks=1 00:23:42.040 --rc geninfo_unexecuted_blocks=1 00:23:42.040 00:23:42.040 ' 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.040 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:42.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.041 06:27:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.586 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:44.587 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:44.587 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
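
Two notes for readers following the trace above. First, the "[: : integer expression expected" complaint captured a few lines up is bash objecting to a numeric test against an empty string: the trace shows nvmf/common.sh@33 evaluating '[' '' -eq 1 ']'. It appears benign here, since execution continues and the script falls through to have_pci_nics=0; guarding the operand (e.g. [ "${VAR:-0}" -eq 1 ], with VAR standing in for whatever variable is being tested) is the usual fix.

Second, the loop above is matching supported NICs by PCI vendor:device ID and then resolving each matching PCI function to the net device the kernel registered for it under sysfs. A minimal standalone sketch of that pattern, assuming lspci is available (the real common.sh consults its prebuilt pci_bus_cache map instead of shelling out):

    # Match Intel E810 variants by vendor:device ID, then list the kernel
    # net device(s) registered under each matching PCI function in sysfs.
    intel=0x8086
    for id in 0x1592 0x159b; do
        for pci in $(lspci -Dn -d "${intel#0x}:${id#0x}" | awk '{print $1}'); do
            for net in "/sys/bus/pci/devices/$pci/net/"*; do
                [[ -e $net ]] && echo "Found net device under $pci: ${net##*/}"
            done
        done
    done

On this rig that resolves 0000:84:00.0 and 0000:84:00.1 (both 0x8086:0x159b, driver ice) to cvl_0_0 and cvl_0_1, as the "Found net devices under ..." lines just below show.
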
00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:44.587 Found net devices under 0000:84:00.0: cvl_0_0 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:44.587 Found net devices under 0000:84:00.1: cvl_0_1 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:23:44.587 00:23:44.587 --- 10.0.0.2 ping statistics --- 00:23:44.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.587 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:23:44.587 00:23:44.587 --- 10.0.0.1 ping statistics --- 00:23:44.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.587 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1127147 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1127147 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1127147 ']' 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.587 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.587 [2024-12-08 06:27:34.390553] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
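
At this point the test-network plumbing is done and nvmf_tgt has just been launched inside the namespace. To recap the plumbing traced above: the second E810 port (cvl_0_0) was moved into a dedicated network namespace to play the NVMe-oF target, cvl_0_1 stayed in the root namespace as the initiator, port 4420 was opened with iptables, and a ping in each direction verified the 10.0.0.0/24 link. Condensed into a standalone replay of just those commands (root required; the cvl_0_* names are simply what this rig's ice ports are called):

    # Target port lives in its own netns; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic in on the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Cross-namespace reachability checks, exactly as the trace runs them.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
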
00:23:44.587 [2024-12-08 06:27:34.390652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.587 [2024-12-08 06:27:34.465922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.587 [2024-12-08 06:27:34.523939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.587 [2024-12-08 06:27:34.523993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.588 [2024-12-08 06:27:34.524007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.588 [2024-12-08 06:27:34.524018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.588 [2024-12-08 06:27:34.524028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.588 [2024-12-08 06:27:34.525647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.588 [2024-12-08 06:27:34.525713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.588 [2024-12-08 06:27:34.525780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.588 [2024-12-08 06:27:34.525784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.588 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.588 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:44.588 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.588 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.588 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.588 [2024-12-08 06:27:34.653603] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.588 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.588 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:44.588 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.588 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.588 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:44.588 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.588 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.849 Malloc0 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.849 [2024-12-08 06:27:34.744968] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.849 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.849 [ 00:23:44.849 { 00:23:44.849 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:44.849 "subtype": "Discovery", 00:23:44.849 "listen_addresses": [ 00:23:44.849 { 00:23:44.849 "trtype": "TCP", 00:23:44.849 "adrfam": "IPv4", 00:23:44.849 "traddr": "10.0.0.2", 00:23:44.849 "trsvcid": "4420" 00:23:44.849 } 00:23:44.849 ], 00:23:44.849 "allow_any_host": true, 00:23:44.849 "hosts": [] 00:23:44.849 }, 00:23:44.849 { 00:23:44.849 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.849 "subtype": "NVMe", 00:23:44.849 "listen_addresses": [ 00:23:44.849 { 00:23:44.849 "trtype": "TCP", 00:23:44.849 "adrfam": "IPv4", 00:23:44.849 "traddr": "10.0.0.2", 00:23:44.849 "trsvcid": "4420" 00:23:44.850 } 00:23:44.850 ], 00:23:44.850 "allow_any_host": true, 00:23:44.850 "hosts": [], 00:23:44.850 "serial_number": "SPDK00000000000001", 00:23:44.850 "model_number": "SPDK bdev Controller", 00:23:44.850 "max_namespaces": 32, 00:23:44.850 "min_cntlid": 1, 00:23:44.850 "max_cntlid": 65519, 00:23:44.850 "namespaces": [ 00:23:44.850 { 00:23:44.850 "nsid": 1, 00:23:44.850 "bdev_name": "Malloc0", 00:23:44.850 "name": "Malloc0", 00:23:44.850 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:44.850 "eui64": "ABCDEF0123456789", 00:23:44.850 "uuid": "99296e61-130e-4a6a-957c-5f8c2af1a2fb" 00:23:44.850 } 00:23:44.850 ] 00:23:44.850 } 00:23:44.850 ] 00:23:44.850 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.850 06:27:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:44.850 [2024-12-08 06:27:34.787530] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:23:44.850 [2024-12-08 06:27:34.787575] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127294 ] 00:23:44.850 [2024-12-08 06:27:34.839061] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:44.850 [2024-12-08 06:27:34.839133] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:44.850 [2024-12-08 06:27:34.839144] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:44.850 [2024-12-08 06:27:34.839166] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:44.850 [2024-12-08 06:27:34.839181] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:44.850 [2024-12-08 06:27:34.843219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:44.850 [2024-12-08 06:27:34.843279] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1518690 0 00:23:44.850 [2024-12-08 06:27:34.850734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:44.850 [2024-12-08 06:27:34.850758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:44.850 [2024-12-08 06:27:34.850798] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:44.850 [2024-12-08 06:27:34.850806] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:44.850 [2024-12-08 06:27:34.850863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.850876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.850884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1518690) 00:23:44.850 [2024-12-08 06:27:34.850903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:44.850 [2024-12-08 06:27:34.850941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a100, cid 0, qid 0 00:23:44.850 [2024-12-08 06:27:34.858748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.850 [2024-12-08 06:27:34.858779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.850 [2024-12-08 06:27:34.858787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.858795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a100) on tqpair=0x1518690 00:23:44.850 [2024-12-08 06:27:34.858824] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:44.850 [2024-12-08 06:27:34.858838] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:44.850 [2024-12-08 06:27:34.858848] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:44.850 [2024-12-08 06:27:34.858880] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.858889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.858895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1518690) 00:23:44.850 [2024-12-08 06:27:34.858906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.850 [2024-12-08 06:27:34.858931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a100, cid 0, qid 0 00:23:44.850 [2024-12-08 06:27:34.859129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.850 [2024-12-08 06:27:34.859143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.850 [2024-12-08 06:27:34.859150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.859157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a100) on tqpair=0x1518690 00:23:44.850 [2024-12-08 06:27:34.859171] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:44.850 [2024-12-08 06:27:34.859186] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:44.850 [2024-12-08 06:27:34.859197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.859205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.859210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1518690) 00:23:44.850 [2024-12-08 06:27:34.859220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.850 [2024-12-08 06:27:34.859242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a100, cid 0, qid 0 00:23:44.850 [2024-12-08 06:27:34.859397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.850 [2024-12-08 06:27:34.859410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.850 [2024-12-08 06:27:34.859417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.859423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a100) on tqpair=0x1518690 00:23:44.850 [2024-12-08 06:27:34.859433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:44.850 [2024-12-08 06:27:34.859448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:44.850 [2024-12-08 06:27:34.859459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.859466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.859472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1518690) 00:23:44.850 [2024-12-08 06:27:34.859481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.850 [2024-12-08 06:27:34.859502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a100, cid 0, qid 0 
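
Everything from here down to the printed controller report is spdk_nvme_identify driving the standard controller-initialization state machine against the discovery subsystem: ICReq/ICResp, FABRIC CONNECT on the admin queue, VS and CAP property reads, CC.EN=1, then IDENTIFY. For orientation, the target-side setup performed by the rpc_cmd calls traced above condenses to the following standalone replay ($SPDK_DIR is a placeholder for the checkout path used in this job, and rpc_cmd in the trace is autotest's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

    # Launch the target inside the namespace, as host/identify.sh@18 does.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    sleep 2   # stand-in for the script's waitforlisten polling on the RPC socket
    rpc="$SPDK_DIR/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Host side: identify the discovery controller over TCP with all debug
    # flags on (-L all), which is what produces the stream below.
    "$SPDK_DIR/build/bin/spdk_nvme_identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

The "NVMe over Fabrics controller at 10.0.0.2:4420" block further down is the decoded result: a discovery controller, hence the blank serial/model fields, zero namespaces, and "Discovery Log Change Notices: Supported".
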
00:23:44.850 [2024-12-08 06:27:34.859588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.850 [2024-12-08 06:27:34.859602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.850 [2024-12-08 06:27:34.859608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.859615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a100) on tqpair=0x1518690 00:23:44.850 [2024-12-08 06:27:34.859627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:44.850 [2024-12-08 06:27:34.859645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.859654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.859660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1518690) 00:23:44.850 [2024-12-08 06:27:34.859670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.850 [2024-12-08 06:27:34.859690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a100, cid 0, qid 0 00:23:44.850 [2024-12-08 06:27:34.859815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.850 [2024-12-08 06:27:34.859829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.850 [2024-12-08 06:27:34.859836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.859843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a100) on tqpair=0x1518690 00:23:44.850 [2024-12-08 06:27:34.859851] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:44.850 [2024-12-08 06:27:34.859860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:44.850 [2024-12-08 06:27:34.859872] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:44.850 [2024-12-08 06:27:34.859982] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:44.850 [2024-12-08 06:27:34.859990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:44.850 [2024-12-08 06:27:34.860031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.860038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.860044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1518690) 00:23:44.850 [2024-12-08 06:27:34.860054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.850 [2024-12-08 06:27:34.860076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a100, cid 0, qid 0 00:23:44.850 [2024-12-08 06:27:34.860217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.850 [2024-12-08 06:27:34.860230] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.850 [2024-12-08 06:27:34.860237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.860243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a100) on tqpair=0x1518690 00:23:44.850 [2024-12-08 06:27:34.860252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:44.850 [2024-12-08 06:27:34.860267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.860276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.860282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1518690) 00:23:44.850 [2024-12-08 06:27:34.860291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.850 [2024-12-08 06:27:34.860311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a100, cid 0, qid 0 00:23:44.850 [2024-12-08 06:27:34.860394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.850 [2024-12-08 06:27:34.860406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.850 [2024-12-08 06:27:34.860419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.860426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a100) on tqpair=0x1518690 00:23:44.850 [2024-12-08 06:27:34.860433] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:44.850 [2024-12-08 06:27:34.860441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:44.850 [2024-12-08 06:27:34.860454] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:44.850 [2024-12-08 06:27:34.860468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:44.850 [2024-12-08 06:27:34.860485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.860492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1518690) 00:23:44.850 [2024-12-08 06:27:34.860502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.850 [2024-12-08 06:27:34.860523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a100, cid 0, qid 0 00:23:44.850 [2024-12-08 06:27:34.860681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.850 [2024-12-08 06:27:34.860695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.850 [2024-12-08 06:27:34.860716] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.860733] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1518690): datao=0, datal=4096, cccid=0 00:23:44.850 [2024-12-08 06:27:34.860741] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x157a100) on tqpair(0x1518690): expected_datao=0, payload_size=4096 00:23:44.850 [2024-12-08 06:27:34.860754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.860773] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.860783] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.901861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.850 [2024-12-08 06:27:34.901880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.850 [2024-12-08 06:27:34.901888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.850 [2024-12-08 06:27:34.901895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a100) on tqpair=0x1518690 00:23:44.850 [2024-12-08 06:27:34.901914] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:44.850 [2024-12-08 06:27:34.901925] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:44.850 [2024-12-08 06:27:34.901932] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:44.850 [2024-12-08 06:27:34.901943] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:44.850 [2024-12-08 06:27:34.901951] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:44.850 [2024-12-08 06:27:34.901959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:44.851 [2024-12-08 06:27:34.901975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:44.851 [2024-12-08 06:27:34.901988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.901996] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.902017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1518690) 00:23:44.851 [2024-12-08 06:27:34.902033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:44.851 [2024-12-08 06:27:34.902057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a100, cid 0, qid 0 00:23:44.851 [2024-12-08 06:27:34.902159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.851 [2024-12-08 06:27:34.902173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.851 [2024-12-08 06:27:34.902180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.902187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a100) on tqpair=0x1518690 00:23:44.851 [2024-12-08 06:27:34.902199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.902206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.902212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1518690) 00:23:44.851 
[2024-12-08 06:27:34.902221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.851 [2024-12-08 06:27:34.902231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.902237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.902243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1518690) 00:23:44.851 [2024-12-08 06:27:34.902251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.851 [2024-12-08 06:27:34.902260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.902266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.902272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1518690) 00:23:44.851 [2024-12-08 06:27:34.902280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.851 [2024-12-08 06:27:34.902289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.902295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.902301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:44.851 [2024-12-08 06:27:34.902309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.851 [2024-12-08 06:27:34.902317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:44.851 [2024-12-08 06:27:34.902337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:44.851 [2024-12-08 06:27:34.902350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.902356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1518690) 00:23:44.851 [2024-12-08 06:27:34.902366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.851 [2024-12-08 06:27:34.902388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a100, cid 0, qid 0 00:23:44.851 [2024-12-08 06:27:34.902399] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a280, cid 1, qid 0 00:23:44.851 [2024-12-08 06:27:34.902406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a400, cid 2, qid 0 00:23:44.851 [2024-12-08 06:27:34.902413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:44.851 [2024-12-08 06:27:34.902420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a700, cid 4, qid 0 00:23:44.851 [2024-12-08 06:27:34.902595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.851 [2024-12-08 06:27:34.902613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.851 [2024-12-08 06:27:34.902620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:23:44.851 [2024-12-08 06:27:34.902627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a700) on tqpair=0x1518690 00:23:44.851 [2024-12-08 06:27:34.902636] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:44.851 [2024-12-08 06:27:34.902645] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:44.851 [2024-12-08 06:27:34.902662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.902671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1518690) 00:23:44.851 [2024-12-08 06:27:34.902681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.851 [2024-12-08 06:27:34.902701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a700, cid 4, qid 0 00:23:44.851 [2024-12-08 06:27:34.906749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.851 [2024-12-08 06:27:34.906765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.851 [2024-12-08 06:27:34.906773] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.906779] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1518690): datao=0, datal=4096, cccid=4 00:23:44.851 [2024-12-08 06:27:34.906786] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x157a700) on tqpair(0x1518690): expected_datao=0, payload_size=4096 00:23:44.851 [2024-12-08 06:27:34.906793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.906803] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.906811] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.906819] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.851 [2024-12-08 06:27:34.906828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.851 [2024-12-08 06:27:34.906834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.906841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a700) on tqpair=0x1518690 00:23:44.851 [2024-12-08 06:27:34.906862] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:44.851 [2024-12-08 06:27:34.906906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.906917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1518690) 00:23:44.851 [2024-12-08 06:27:34.906927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.851 [2024-12-08 06:27:34.906939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.906945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.906951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1518690) 00:23:44.851 [2024-12-08 06:27:34.906960] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.851 [2024-12-08 06:27:34.906988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a700, cid 4, qid 0 00:23:44.851 [2024-12-08 06:27:34.907000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a880, cid 5, qid 0 00:23:44.851 [2024-12-08 06:27:34.907217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.851 [2024-12-08 06:27:34.907231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.851 [2024-12-08 06:27:34.907238] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.907244] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1518690): datao=0, datal=1024, cccid=4 00:23:44.851 [2024-12-08 06:27:34.907255] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x157a700) on tqpair(0x1518690): expected_datao=0, payload_size=1024 00:23:44.851 [2024-12-08 06:27:34.907262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.907271] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.907278] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.907286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.851 [2024-12-08 06:27:34.907295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.851 [2024-12-08 06:27:34.907301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.907307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a880) on tqpair=0x1518690 00:23:44.851 [2024-12-08 06:27:34.947874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.851 [2024-12-08 06:27:34.947893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.851 [2024-12-08 06:27:34.947902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.947909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a700) on tqpair=0x1518690 00:23:44.851 [2024-12-08 06:27:34.947927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.947936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1518690) 00:23:44.851 [2024-12-08 06:27:34.947947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.851 [2024-12-08 06:27:34.947978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a700, cid 4, qid 0 00:23:44.851 [2024-12-08 06:27:34.948333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.851 [2024-12-08 06:27:34.948348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.851 [2024-12-08 06:27:34.948355] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.851 [2024-12-08 06:27:34.948369] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1518690): datao=0, datal=3072, cccid=4 00:23:44.851 [2024-12-08 06:27:34.948376] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x157a700) on tqpair(0x1518690): expected_datao=0, payload_size=3072 00:23:44.851 [2024-12-08 06:27:34.948383] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.851 [2024-12-08 06:27:34.948393] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:44.851 [2024-12-08 06:27:34.948400] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:44.851 [2024-12-08 06:27:34.948412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:44.851 [2024-12-08 06:27:34.948421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:44.851 [2024-12-08 06:27:34.948428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:44.851 [2024-12-08 06:27:34.948434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a700) on tqpair=0x1518690
00:23:44.851 [2024-12-08 06:27:34.948449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:44.851 [2024-12-08 06:27:34.948457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1518690)
00:23:44.851 [2024-12-08 06:27:34.948467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.851 [2024-12-08 06:27:34.948495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a700, cid 4, qid 0
00:23:44.851 [2024-12-08 06:27:34.948658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:44.851 [2024-12-08 06:27:34.948672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:44.851 [2024-12-08 06:27:34.948679] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:44.851 [2024-12-08 06:27:34.948685] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1518690): datao=0, datal=8, cccid=4
00:23:44.851 [2024-12-08 06:27:34.948697] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x157a700) on tqpair(0x1518690): expected_datao=0, payload_size=8
00:23:44.851 [2024-12-08 06:27:34.948727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:44.851 [2024-12-08 06:27:34.948739] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:44.851 [2024-12-08 06:27:34.948747] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:45.113 [2024-12-08 06:27:34.993738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:45.113 [2024-12-08 06:27:34.993756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:45.113 [2024-12-08 06:27:34.993764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:45.113 [2024-12-08 06:27:34.993771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a700) on tqpair=0x1518690
00:23:45.113 =====================================================
00:23:45.113 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:45.113 =====================================================
00:23:45.113 Controller Capabilities/Features
00:23:45.113 ================================
00:23:45.113 Vendor ID: 0000
00:23:45.113 Subsystem Vendor ID: 0000
00:23:45.113 Serial Number: ....................
00:23:45.113 Model Number: ........................................
00:23:45.113 Firmware Version: 25.01
00:23:45.113 Recommended Arb Burst: 0
00:23:45.113 IEEE OUI Identifier: 00 00 00
00:23:45.113 Multi-path I/O
00:23:45.113 May have multiple subsystem ports: No
00:23:45.113 May have multiple controllers: No
00:23:45.113 Associated with SR-IOV VF: No
00:23:45.113 Max Data Transfer Size: 131072
00:23:45.113 Max Number of Namespaces: 0
00:23:45.113 Max Number of I/O Queues: 1024
00:23:45.113 NVMe Specification Version (VS): 1.3
00:23:45.113 NVMe Specification Version (Identify): 1.3
00:23:45.113 Maximum Queue Entries: 128
00:23:45.113 Contiguous Queues Required: Yes
00:23:45.113 Arbitration Mechanisms Supported
00:23:45.113 Weighted Round Robin: Not Supported
00:23:45.113 Vendor Specific: Not Supported
00:23:45.113 Reset Timeout: 15000 ms
00:23:45.113 Doorbell Stride: 4 bytes
00:23:45.113 NVM Subsystem Reset: Not Supported
00:23:45.113 Command Sets Supported
00:23:45.113 NVM Command Set: Supported
00:23:45.113 Boot Partition: Not Supported
00:23:45.113 Memory Page Size Minimum: 4096 bytes
00:23:45.113 Memory Page Size Maximum: 4096 bytes
00:23:45.113 Persistent Memory Region: Not Supported
00:23:45.113 Optional Asynchronous Events Supported
00:23:45.113 Namespace Attribute Notices: Not Supported
00:23:45.113 Firmware Activation Notices: Not Supported
00:23:45.113 ANA Change Notices: Not Supported
00:23:45.113 PLE Aggregate Log Change Notices: Not Supported
00:23:45.113 LBA Status Info Alert Notices: Not Supported
00:23:45.113 EGE Aggregate Log Change Notices: Not Supported
00:23:45.113 Normal NVM Subsystem Shutdown event: Not Supported
00:23:45.113 Zone Descriptor Change Notices: Not Supported
00:23:45.113 Discovery Log Change Notices: Supported
00:23:45.113 Controller Attributes
00:23:45.113 128-bit Host Identifier: Not Supported
00:23:45.113 Non-Operational Permissive Mode: Not Supported
00:23:45.113 NVM Sets: Not Supported
00:23:45.113 Read Recovery Levels: Not Supported
00:23:45.113 Endurance Groups: Not Supported
00:23:45.113 Predictable Latency Mode: Not Supported
00:23:45.113 Traffic Based Keep ALive: Not Supported
00:23:45.113 Namespace Granularity: Not Supported
00:23:45.113 SQ Associations: Not Supported
00:23:45.113 UUID List: Not Supported
00:23:45.113 Multi-Domain Subsystem: Not Supported
00:23:45.113 Fixed Capacity Management: Not Supported
00:23:45.114 Variable Capacity Management: Not Supported
00:23:45.114 Delete Endurance Group: Not Supported
00:23:45.114 Delete NVM Set: Not Supported
00:23:45.114 Extended LBA Formats Supported: Not Supported
00:23:45.114 Flexible Data Placement Supported: Not Supported
00:23:45.114
00:23:45.114 Controller Memory Buffer Support
00:23:45.114 ================================
00:23:45.114 Supported: No
00:23:45.114
00:23:45.114 Persistent Memory Region Support
00:23:45.114 ================================
00:23:45.114 Supported: No
00:23:45.114
00:23:45.114 Admin Command Set Attributes
00:23:45.114 ============================
00:23:45.114 Security Send/Receive: Not Supported
00:23:45.114 Format NVM: Not Supported
00:23:45.114 Firmware Activate/Download: Not Supported
00:23:45.114 Namespace Management: Not Supported
00:23:45.114 Device Self-Test: Not Supported
00:23:45.114 Directives: Not Supported
00:23:45.114 NVMe-MI: Not Supported
00:23:45.114 Virtualization Management: Not Supported
00:23:45.114 Doorbell Buffer Config: Not Supported
00:23:45.114 Get LBA Status Capability: Not Supported
00:23:45.114 Command & Feature Lockdown Capability: Not Supported
00:23:45.114 Abort Command Limit: 1
00:23:45.114 Async Event Request Limit: 4
00:23:45.114 Number of Firmware Slots: N/A
00:23:45.114 Firmware Slot 1 Read-Only: N/A
00:23:45.114 Firmware Activation Without Reset: N/A
00:23:45.114 Multiple Update Detection Support: N/A
00:23:45.114 Firmware Update Granularity: No Information Provided
00:23:45.114 Per-Namespace SMART Log: No
00:23:45.114 Asymmetric Namespace Access Log Page: Not Supported
00:23:45.114 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:45.114 Command Effects Log Page: Not Supported
00:23:45.114 Get Log Page Extended Data: Supported
00:23:45.114 Telemetry Log Pages: Not Supported
00:23:45.114 Persistent Event Log Pages: Not Supported
00:23:45.114 Supported Log Pages Log Page: May Support
00:23:45.114 Commands Supported & Effects Log Page: Not Supported
00:23:45.114 Feature Identifiers & Effects Log Page: May Support
00:23:45.114 NVMe-MI Commands & Effects Log Page: May Support
00:23:45.114 Data Area 4 for Telemetry Log: Not Supported
00:23:45.114 Error Log Page Entries Supported: 128
00:23:45.114 Keep Alive: Not Supported
00:23:45.114
00:23:45.114 NVM Command Set Attributes
00:23:45.114 ==========================
00:23:45.114 Submission Queue Entry Size
00:23:45.114 Max: 1
00:23:45.114 Min: 1
00:23:45.114 Completion Queue Entry Size
00:23:45.114 Max: 1
00:23:45.114 Min: 1
00:23:45.114 Number of Namespaces: 0
00:23:45.114 Compare Command: Not Supported
00:23:45.114 Write Uncorrectable Command: Not Supported
00:23:45.114 Dataset Management Command: Not Supported
00:23:45.114 Write Zeroes Command: Not Supported
00:23:45.114 Set Features Save Field: Not Supported
00:23:45.114 Reservations: Not Supported
00:23:45.114 Timestamp: Not Supported
00:23:45.114 Copy: Not Supported
00:23:45.114 Volatile Write Cache: Not Present
00:23:45.114 Atomic Write Unit (Normal): 1
00:23:45.114 Atomic Write Unit (PFail): 1
00:23:45.114 Atomic Compare & Write Unit: 1
00:23:45.114 Fused Compare & Write: Supported
00:23:45.114 Scatter-Gather List
00:23:45.114 SGL Command Set: Supported
00:23:45.114 SGL Keyed: Supported
00:23:45.114 SGL Bit Bucket Descriptor: Not Supported
00:23:45.114 SGL Metadata Pointer: Not Supported
00:23:45.114 Oversized SGL: Not Supported
00:23:45.114 SGL Metadata Address: Not Supported
00:23:45.114 SGL Offset: Supported
00:23:45.114 Transport SGL Data Block: Not Supported
00:23:45.114 Replay Protected Memory Block: Not Supported
00:23:45.114
00:23:45.114 Firmware Slot Information
00:23:45.114 =========================
00:23:45.114 Active slot: 0
00:23:45.114
00:23:45.114
00:23:45.114 Error Log
00:23:45.114 =========
00:23:45.114
00:23:45.114 Active Namespaces
00:23:45.114 =================
00:23:45.114 Discovery Log Page
00:23:45.114 ==================
00:23:45.114 Generation Counter: 2
00:23:45.114 Number of Records: 2
00:23:45.114 Record Format: 0
00:23:45.114
00:23:45.114 Discovery Log Entry 0
00:23:45.114 ----------------------
00:23:45.114 Transport Type: 3 (TCP)
00:23:45.114 Address Family: 1 (IPv4)
00:23:45.114 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:45.114 Entry Flags:
00:23:45.114 Duplicate Returned Information: 1
00:23:45.114 Explicit Persistent Connection Support for Discovery: 1
00:23:45.114 Transport Requirements:
00:23:45.114 Secure Channel: Not Required
00:23:45.114 Port ID: 0 (0x0000)
00:23:45.114 Controller ID: 65535 (0xffff)
00:23:45.114 Admin Max SQ Size: 128
00:23:45.114 Transport Service Identifier: 4420
00:23:45.114 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:45.114 Transport Address: 10.0.0.2
00:23:45.114 Discovery Log Entry 1
00:23:45.114 ----------------------
00:23:45.114 Transport Type: 3 (TCP)
00:23:45.114 Address Family: 1 (IPv4)
00:23:45.114 Subsystem Type: 2 (NVM Subsystem)
00:23:45.114 Entry Flags:
00:23:45.114 Duplicate Returned Information: 0
00:23:45.114 Explicit Persistent Connection Support for Discovery: 0
00:23:45.114 Transport Requirements:
00:23:45.114 Secure Channel: Not Required
00:23:45.114 Port ID: 0 (0x0000)
00:23:45.114 Controller ID: 65535 (0xffff)
00:23:45.114 Admin Max SQ Size: 128
00:23:45.114 Transport Service Identifier: 4420
00:23:45.114 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:45.114 Transport Address: 10.0.0.2
[2024-12-08 06:27:34.993901] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:23:45.114 [2024-12-08 06:27:34.993935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a100) on tqpair=0x1518690
00:23:45.114 [2024-12-08 06:27:34.993949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:45.114 [2024-12-08 06:27:34.993957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a280) on tqpair=0x1518690
00:23:45.114 [2024-12-08 06:27:34.993965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:45.114 [2024-12-08 06:27:34.993973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a400) on tqpair=0x1518690
00:23:45.114 [2024-12-08 06:27:34.993980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:45.114 [2024-12-08 06:27:34.993988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690
00:23:45.114 [2024-12-08 06:27:34.993995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:45.114 [2024-12-08 06:27:34.994027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:45.114 [2024-12-08 06:27:34.994036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:45.114 [2024-12-08 06:27:34.994042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690)
00:23:45.114 [2024-12-08 06:27:34.994053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:45.114 [2024-12-08 06:27:34.994078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0
00:23:45.114 [2024-12-08 06:27:34.994217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:45.114 [2024-12-08 06:27:34.994231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:45.114 [2024-12-08 06:27:34.994238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:45.114 [2024-12-08 06:27:34.994245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690
00:23:45.114 [2024-12-08 06:27:34.994256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:45.114 [2024-12-08 06:27:34.994264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:45.114 [2024-12-08 06:27:34.994270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690)
00:23:45.114 [2024-12-08
06:27:34.994280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.114 [2024-12-08 06:27:34.994306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.114 [2024-12-08 06:27:34.994402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.114 [2024-12-08 06:27:34.994414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.114 [2024-12-08 06:27:34.994421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.114 [2024-12-08 06:27:34.994427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.114 [2024-12-08 06:27:34.994440] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:45.114 [2024-12-08 06:27:34.994449] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:45.114 [2024-12-08 06:27:34.994465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.114 [2024-12-08 06:27:34.994473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.114 [2024-12-08 06:27:34.994479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.114 [2024-12-08 06:27:34.994488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.114 [2024-12-08 06:27:34.994508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.114 [2024-12-08 06:27:34.994591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.115 [2024-12-08 06:27:34.994605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.115 [2024-12-08 06:27:34.994612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.994618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.115 [2024-12-08 06:27:34.994635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.994644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.994650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.115 [2024-12-08 06:27:34.994660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.115 [2024-12-08 06:27:34.994680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.115 [2024-12-08 06:27:34.994785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.115 [2024-12-08 06:27:34.994801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.115 [2024-12-08 06:27:34.994808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.994815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.115 [2024-12-08 06:27:34.994831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.994840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.994847] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.115 [2024-12-08 06:27:34.994857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.115 [2024-12-08 06:27:34.994879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.115 [2024-12-08 06:27:34.994967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.115 [2024-12-08 06:27:34.994981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.115 [2024-12-08 06:27:34.994988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.994995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.115 [2024-12-08 06:27:34.995012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.115 [2024-12-08 06:27:34.995052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.115 [2024-12-08 06:27:34.995074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.115 [2024-12-08 06:27:34.995161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.115 [2024-12-08 06:27:34.995173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.115 [2024-12-08 06:27:34.995184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.115 [2024-12-08 06:27:34.995207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.115 [2024-12-08 06:27:34.995231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.115 [2024-12-08 06:27:34.995251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.115 [2024-12-08 06:27:34.995330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.115 [2024-12-08 06:27:34.995343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.115 [2024-12-08 06:27:34.995350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.115 [2024-12-08 06:27:34.995371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.115 [2024-12-08 06:27:34.995396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.115 [2024-12-08 06:27:34.995416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.115 [2024-12-08 06:27:34.995491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.115 [2024-12-08 06:27:34.995504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.115 [2024-12-08 06:27:34.995511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.115 [2024-12-08 06:27:34.995532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.115 [2024-12-08 06:27:34.995556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.115 [2024-12-08 06:27:34.995576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.115 [2024-12-08 06:27:34.995651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.115 [2024-12-08 06:27:34.995664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.115 [2024-12-08 06:27:34.995671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.115 [2024-12-08 06:27:34.995692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.115 [2024-12-08 06:27:34.995716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.115 [2024-12-08 06:27:34.995763] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.115 [2024-12-08 06:27:34.995847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.115 [2024-12-08 06:27:34.995861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.115 [2024-12-08 06:27:34.995868] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.115 [2024-12-08 06:27:34.995896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.995911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.115 [2024-12-08 06:27:34.995921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.115 [2024-12-08 06:27:34.995941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.115 
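What this stretch of the trace shows is the discovery controller being torn down: the FABRIC PROPERTY SET earlier wrote CC.SHN, and each FABRIC PROPERTY GET block here is one poll of CSTS while the host waits for shutdown to finish. Over NVMe/TCP every register access travels as a Fabrics Property Get/Set capsule, which is why the same nine-line pattern repeats. In spec terms the handshake is roughly the sketch below; write_cc() and read_csts() are hypothetical helpers standing in for those property capsules, not SPDK APIs, and an application normally gets all of this for free from spdk_nvme_detach():

    /* Illustrative sketch of the shutdown handshake traced here.
     * write_cc()/read_csts() are hypothetical stand-ins for the
     * FABRIC PROPERTY SET/GET capsules visible in the log. */
    union spdk_nvme_cc_register cc = read_cc();
    union spdk_nvme_csts_register csts;

    cc.bits.shn = SPDK_NVME_SHN_NORMAL;   /* the PROPERTY SET above     */
    write_cc(cc);
    do {
        csts = read_csts();               /* one PROPERTY GET per poll  */
    } while (csts.bits.shst != SPDK_NVME_SHST_COMPLETE);

The loop's exit corresponds to the "shutdown complete in 7 milliseconds" message a little further down.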
[2024-12-08 06:27:34.996020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.115 [2024-12-08 06:27:34.996049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.115 [2024-12-08 06:27:34.996056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.996062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.115 [2024-12-08 06:27:34.996079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.996087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.996093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.115 [2024-12-08 06:27:34.996103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.115 [2024-12-08 06:27:34.996123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.115 [2024-12-08 06:27:34.996198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.115 [2024-12-08 06:27:34.996211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.115 [2024-12-08 06:27:34.996218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.996224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.115 [2024-12-08 06:27:34.996239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.996248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.996254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.115 [2024-12-08 06:27:34.996264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.115 [2024-12-08 06:27:34.996284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.115 [2024-12-08 06:27:34.996358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.115 [2024-12-08 06:27:34.996371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.115 [2024-12-08 06:27:34.996378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.996384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.115 [2024-12-08 06:27:34.996399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.996408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.996414] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.115 [2024-12-08 06:27:34.996423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.115 [2024-12-08 06:27:34.996444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.115 [2024-12-08 06:27:34.996518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.115 [2024-12-08 06:27:34.996531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:45.115 [2024-12-08 06:27:34.996537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.996544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.115 [2024-12-08 06:27:34.996563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.996572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.115 [2024-12-08 06:27:34.996578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.115 [2024-12-08 06:27:34.996588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.115 [2024-12-08 06:27:34.996608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.116 [2024-12-08 06:27:34.996681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.116 [2024-12-08 06:27:34.996694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.116 [2024-12-08 06:27:34.996716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.996730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.116 [2024-12-08 06:27:34.996748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.996757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.996763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.116 [2024-12-08 06:27:34.996773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.116 [2024-12-08 06:27:34.996795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.116 [2024-12-08 06:27:34.996876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.116 [2024-12-08 06:27:34.996889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.116 [2024-12-08 06:27:34.996896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.996903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.116 [2024-12-08 06:27:34.996919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.996927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.996934] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.116 [2024-12-08 06:27:34.996944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.116 [2024-12-08 06:27:34.996964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.116 [2024-12-08 06:27:34.997055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.116 [2024-12-08 06:27:34.997069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.116 [2024-12-08 06:27:34.997075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.997082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.116 [2024-12-08 06:27:34.997097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.997106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.997112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.116 [2024-12-08 06:27:34.997121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.116 [2024-12-08 06:27:34.997141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.116 [2024-12-08 06:27:34.997221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.116 [2024-12-08 06:27:34.997232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.116 [2024-12-08 06:27:34.997239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.997245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.116 [2024-12-08 06:27:34.997261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.997273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.997280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.116 [2024-12-08 06:27:34.997289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.116 [2024-12-08 06:27:34.997309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.116 [2024-12-08 06:27:34.997386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.116 [2024-12-08 06:27:34.997397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.116 [2024-12-08 06:27:34.997404] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.997410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.116 [2024-12-08 06:27:34.997425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.997434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.997440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.116 [2024-12-08 06:27:34.997449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.116 [2024-12-08 06:27:34.997470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.116 [2024-12-08 06:27:34.997542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.116 [2024-12-08 06:27:34.997553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.116 [2024-12-08 06:27:34.997560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.997566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.116 [2024-12-08 06:27:34.997582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.997590] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:34.997597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.116 [2024-12-08 06:27:34.997606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.116 [2024-12-08 06:27:34.997626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.116 [2024-12-08 06:27:34.997697] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.116 [2024-12-08 06:27:35.001734] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.116 [2024-12-08 06:27:35.001746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:35.001753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.116 [2024-12-08 06:27:35.001771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:35.001780] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:35.001786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1518690) 00:23:45.116 [2024-12-08 06:27:35.001797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.116 [2024-12-08 06:27:35.001819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x157a580, cid 3, qid 0 00:23:45.116 [2024-12-08 06:27:35.001975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.116 [2024-12-08 06:27:35.001987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.116 [2024-12-08 06:27:35.001994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:35.002016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x157a580) on tqpair=0x1518690 00:23:45.116 [2024-12-08 06:27:35.002029] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:23:45.116 00:23:45.116 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:45.116 [2024-12-08 06:27:35.038629] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
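The spdk_nvme_identify invocation above boils down to a handful of public SPDK calls. Below is a minimal sketch of the same connect path (the app name is made up and error handling is trimmed), using the exact transport-ID string passed via -r:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&opts);
        opts.name = "identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Same format as the -r argument above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Drives everything traced below: socket connect, ICReq/ICResp,
         * FABRIC CONNECT, the CC/CSTS enable handshake, IDENTIFY, AER
         * setup and keep-alive configuration. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        printf("connected to %s\n", spdk_nvme_ctrlr_get_data(ctrlr)->subnqn);
        spdk_nvme_detach(ctrlr);
        return 0;
    }

Everything the DEBUG lines that follow record happens inside that single spdk_nvme_connect() call.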
00:23:45.116 [2024-12-08 06:27:35.038675] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127298 ] 00:23:45.116 [2024-12-08 06:27:35.090253] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:45.116 [2024-12-08 06:27:35.090309] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:45.116 [2024-12-08 06:27:35.090320] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:45.116 [2024-12-08 06:27:35.090339] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:45.116 [2024-12-08 06:27:35.090352] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:45.116 [2024-12-08 06:27:35.090916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:45.116 [2024-12-08 06:27:35.090959] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1320690 0 00:23:45.116 [2024-12-08 06:27:35.096731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:45.116 [2024-12-08 06:27:35.096751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:45.116 [2024-12-08 06:27:35.096764] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:45.116 [2024-12-08 06:27:35.096772] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:45.116 [2024-12-08 06:27:35.096804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:35.096816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:35.096823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1320690) 00:23:45.116 [2024-12-08 06:27:35.096838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:45.116 [2024-12-08 06:27:35.096866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382100, cid 0, qid 0 00:23:45.116 [2024-12-08 06:27:35.104734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.116 [2024-12-08 06:27:35.104752] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.116 [2024-12-08 06:27:35.104760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:35.104768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382100) on tqpair=0x1320690 00:23:45.116 [2024-12-08 06:27:35.104783] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:45.116 [2024-12-08 06:27:35.104794] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:45.116 [2024-12-08 06:27:35.104804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:45.116 [2024-12-08 06:27:35.104824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:35.104833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.116 [2024-12-08 06:27:35.104840] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1320690) 00:23:45.116 [2024-12-08 06:27:35.104851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.116 [2024-12-08 06:27:35.104876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382100, cid 0, qid 0 00:23:45.116 [2024-12-08 06:27:35.105066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.116 [2024-12-08 06:27:35.105096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.116 [2024-12-08 06:27:35.105103] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.105110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382100) on tqpair=0x1320690 00:23:45.117 [2024-12-08 06:27:35.105121] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:45.117 [2024-12-08 06:27:35.105135] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:45.117 [2024-12-08 06:27:35.105147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.105155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.105161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1320690) 00:23:45.117 [2024-12-08 06:27:35.105171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.117 [2024-12-08 06:27:35.105193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382100, cid 0, qid 0 00:23:45.117 [2024-12-08 06:27:35.105332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.117 [2024-12-08 06:27:35.105345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.117 [2024-12-08 06:27:35.105352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.105358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382100) on tqpair=0x1320690 00:23:45.117 [2024-12-08 06:27:35.105366] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:45.117 [2024-12-08 06:27:35.105380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:45.117 [2024-12-08 06:27:35.105392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.105399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.105405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1320690) 00:23:45.117 [2024-12-08 06:27:35.105415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.117 [2024-12-08 06:27:35.105437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382100, cid 0, qid 0 00:23:45.117 [2024-12-08 06:27:35.105523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.117 [2024-12-08 06:27:35.105536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.117 [2024-12-08 
06:27:35.105543] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.105549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382100) on tqpair=0x1320690 00:23:45.117 [2024-12-08 06:27:35.105557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:45.117 [2024-12-08 06:27:35.105574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.105583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.105589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1320690) 00:23:45.117 [2024-12-08 06:27:35.105599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.117 [2024-12-08 06:27:35.105620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382100, cid 0, qid 0 00:23:45.117 [2024-12-08 06:27:35.105750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.117 [2024-12-08 06:27:35.105764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.117 [2024-12-08 06:27:35.105771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.105782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382100) on tqpair=0x1320690 00:23:45.117 [2024-12-08 06:27:35.105789] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:45.117 [2024-12-08 06:27:35.105798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:45.117 [2024-12-08 06:27:35.105812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:45.117 [2024-12-08 06:27:35.105922] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:45.117 [2024-12-08 06:27:35.105930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:45.117 [2024-12-08 06:27:35.105942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.105949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.105956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1320690) 00:23:45.117 [2024-12-08 06:27:35.105966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.117 [2024-12-08 06:27:35.105988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382100, cid 0, qid 0 00:23:45.117 [2024-12-08 06:27:35.106170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.117 [2024-12-08 06:27:35.106184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.117 [2024-12-08 06:27:35.106190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.106197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382100) on tqpair=0x1320690 00:23:45.117 
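At this point the trace has just written CC.EN = 1; what follows is the wait for CSTS.RDY = 1, completing the controller-enable handshake that the NVMe spec requires before regular admin commands (disable until RDY clears, enable, then wait for RDY to set). Schematically, with set_en() and rdy() as illustrative placeholders for the Property Set/Get capsules rather than SPDK APIs (from application code the observable end state is spdk_nvme_ctrlr_get_regs_csts(ctrlr).bits.rdy == 1):

    /* Pseudo-C of the enable ladder in this part of the trace. */
    set_en(0);               /* "disable and wait for CSTS.RDY = 0"   */
    while (rdy() != 0) { }
    set_en(1);               /* "Setting CC.EN = 1"                   */
    while (rdy() != 1) { }   /* "CC.EN = 1 && CSTS.RDY = 1 - ready"   */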
[2024-12-08 06:27:35.106204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:45.117 [2024-12-08 06:27:35.106221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.106230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.106236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1320690) 00:23:45.117 [2024-12-08 06:27:35.106246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.117 [2024-12-08 06:27:35.106267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382100, cid 0, qid 0 00:23:45.117 [2024-12-08 06:27:35.106355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.117 [2024-12-08 06:27:35.106368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.117 [2024-12-08 06:27:35.106374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.106381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382100) on tqpair=0x1320690 00:23:45.117 [2024-12-08 06:27:35.106388] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:45.117 [2024-12-08 06:27:35.106396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:45.117 [2024-12-08 06:27:35.106410] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:45.117 [2024-12-08 06:27:35.106423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:45.117 [2024-12-08 06:27:35.106436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.106444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1320690) 00:23:45.117 [2024-12-08 06:27:35.106457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.117 [2024-12-08 06:27:35.106479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382100, cid 0, qid 0 00:23:45.117 [2024-12-08 06:27:35.106620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.117 [2024-12-08 06:27:35.106635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.117 [2024-12-08 06:27:35.106642] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.106648] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1320690): datao=0, datal=4096, cccid=0 00:23:45.117 [2024-12-08 06:27:35.106655] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1382100) on tqpair(0x1320690): expected_datao=0, payload_size=4096 00:23:45.117 [2024-12-08 06:27:35.106662] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.106672] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.106678] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.106712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.117 [2024-12-08 06:27:35.106733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.117 [2024-12-08 06:27:35.106741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.106748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382100) on tqpair=0x1320690 00:23:45.117 [2024-12-08 06:27:35.106766] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:45.117 [2024-12-08 06:27:35.106775] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:45.117 [2024-12-08 06:27:35.106783] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:45.117 [2024-12-08 06:27:35.106789] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:45.117 [2024-12-08 06:27:35.106796] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:45.117 [2024-12-08 06:27:35.106804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:45.117 [2024-12-08 06:27:35.106820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:45.117 [2024-12-08 06:27:35.106832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.106839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.106845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1320690) 00:23:45.117 [2024-12-08 06:27:35.106856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:45.117 [2024-12-08 06:27:35.106878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382100, cid 0, qid 0 00:23:45.117 [2024-12-08 06:27:35.107060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.117 [2024-12-08 06:27:35.107074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.117 [2024-12-08 06:27:35.107080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.107087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382100) on tqpair=0x1320690 00:23:45.117 [2024-12-08 06:27:35.107096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.107103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.107109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1320690) 00:23:45.117 [2024-12-08 06:27:35.107119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.117 [2024-12-08 06:27:35.107132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.117 [2024-12-08 06:27:35.107139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.117 [2024-12-08 
06:27:35.107145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1320690) 00:23:45.118 [2024-12-08 06:27:35.107153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.118 [2024-12-08 06:27:35.107163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.107169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.107175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1320690) 00:23:45.118 [2024-12-08 06:27:35.107183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.118 [2024-12-08 06:27:35.107192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.107198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.107204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.118 [2024-12-08 06:27:35.107212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.118 [2024-12-08 06:27:35.107221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:45.118 [2024-12-08 06:27:35.107240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:45.118 [2024-12-08 06:27:35.107252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.107259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1320690) 00:23:45.118 [2024-12-08 06:27:35.107269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.118 [2024-12-08 06:27:35.107291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382100, cid 0, qid 0 00:23:45.118 [2024-12-08 06:27:35.107302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382280, cid 1, qid 0 00:23:45.118 [2024-12-08 06:27:35.107309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382400, cid 2, qid 0 00:23:45.118 [2024-12-08 06:27:35.107316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.118 [2024-12-08 06:27:35.107323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382700, cid 4, qid 0 00:23:45.118 [2024-12-08 06:27:35.107502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.118 [2024-12-08 06:27:35.107515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.118 [2024-12-08 06:27:35.107522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.107528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382700) on tqpair=0x1320690 00:23:45.118 [2024-12-08 06:27:35.107535] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:45.118 [2024-12-08 06:27:35.107544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:45.118 [2024-12-08 06:27:35.107559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:45.118 [2024-12-08 06:27:35.107570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:45.118 [2024-12-08 06:27:35.107580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.107587] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.107593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1320690) 00:23:45.118 [2024-12-08 06:27:35.107607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:45.118 [2024-12-08 06:27:35.107629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382700, cid 4, qid 0 00:23:45.118 [2024-12-08 06:27:35.107811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.118 [2024-12-08 06:27:35.107827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.118 [2024-12-08 06:27:35.107834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.107840] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382700) on tqpair=0x1320690 00:23:45.118 [2024-12-08 06:27:35.107908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:45.118 [2024-12-08 06:27:35.107930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:45.118 [2024-12-08 06:27:35.107946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.107954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1320690) 00:23:45.118 [2024-12-08 06:27:35.107964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.118 [2024-12-08 06:27:35.107986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382700, cid 4, qid 0 00:23:45.118 [2024-12-08 06:27:35.108139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.118 [2024-12-08 06:27:35.108154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.118 [2024-12-08 06:27:35.108160] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.108166] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1320690): datao=0, datal=4096, cccid=4 00:23:45.118 [2024-12-08 06:27:35.108173] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1382700) on tqpair(0x1320690): expected_datao=0, payload_size=4096 00:23:45.118 [2024-12-08 06:27:35.108180] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.108198] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.108206] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.118 [2024-12-08 
06:27:35.108250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.118 [2024-12-08 06:27:35.108263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.118 [2024-12-08 06:27:35.108270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.108276] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382700) on tqpair=0x1320690 00:23:45.118 [2024-12-08 06:27:35.108294] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:45.118 [2024-12-08 06:27:35.108317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:45.118 [2024-12-08 06:27:35.108337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:45.118 [2024-12-08 06:27:35.108350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.108357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1320690) 00:23:45.118 [2024-12-08 06:27:35.108367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.118 [2024-12-08 06:27:35.108388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382700, cid 4, qid 0 00:23:45.118 [2024-12-08 06:27:35.108513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.118 [2024-12-08 06:27:35.108526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.118 [2024-12-08 06:27:35.108536] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.108543] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1320690): datao=0, datal=4096, cccid=4 00:23:45.118 [2024-12-08 06:27:35.108550] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1382700) on tqpair(0x1320690): expected_datao=0, payload_size=4096 00:23:45.118 [2024-12-08 06:27:35.108557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.108577] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.108585] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.108638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.118 [2024-12-08 06:27:35.108651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.118 [2024-12-08 06:27:35.108658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.108664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382700) on tqpair=0x1320690 00:23:45.118 [2024-12-08 06:27:35.108689] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:45.118 [2024-12-08 06:27:35.112729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:45.118 [2024-12-08 06:27:35.112750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.112758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1320690) 00:23:45.118 [2024-12-08 06:27:35.112785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.118 [2024-12-08 06:27:35.112810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382700, cid 4, qid 0 00:23:45.118 [2024-12-08 06:27:35.112973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.118 [2024-12-08 06:27:35.112988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.118 [2024-12-08 06:27:35.112995] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.113001] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1320690): datao=0, datal=4096, cccid=4 00:23:45.118 [2024-12-08 06:27:35.113008] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1382700) on tqpair(0x1320690): expected_datao=0, payload_size=4096 00:23:45.118 [2024-12-08 06:27:35.113030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.118 [2024-12-08 06:27:35.113049] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.113058] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.113194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.119 [2024-12-08 06:27:35.113207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.119 [2024-12-08 06:27:35.113214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.113220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382700) on tqpair=0x1320690 00:23:45.119 [2024-12-08 06:27:35.113234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:45.119 [2024-12-08 06:27:35.113249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:45.119 [2024-12-08 06:27:35.113265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:45.119 [2024-12-08 06:27:35.113279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:45.119 [2024-12-08 06:27:35.113289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:45.119 [2024-12-08 06:27:35.113300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:45.119 [2024-12-08 06:27:35.113310] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:45.119 [2024-12-08 06:27:35.113318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:45.119 [2024-12-08 06:27:35.113326] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:45.119 [2024-12-08 06:27:35.113345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.119 
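
The _nvme_ctrlr_set_state records above are the NVMe host initialization state machine stepping through the admin phase over the TCP transport: set keep alive timeout, set number of queues, identify active ns, identify ns, and identify namespace id descriptors, each guarded by a 30000 ms state timeout. A minimal sketch of reproducing this trace against a locally built target follows; only the NQN, serial number, address, and port are taken from this log, while the core mask, malloc bdev size, and the identify example's log flag are assumptions (debug output like this also requires an --enable-debug build):

    # Sketch only, not the exact commands this job ran.
    ./build/bin/nvmf_tgt -m 0x3 &
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # -L enables per-component *DEBUG* logging in debug builds (flag name assumed).
    ./build/examples/identify -L nvme \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
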
[2024-12-08 06:27:35.113353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1320690) 00:23:45.119 [2024-12-08 06:27:35.113363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.119 [2024-12-08 06:27:35.113373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.113380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.113386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1320690) 00:23:45.119 [2024-12-08 06:27:35.113394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.119 [2024-12-08 06:27:35.113419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382700, cid 4, qid 0 00:23:45.119 [2024-12-08 06:27:35.113430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382880, cid 5, qid 0 00:23:45.119 [2024-12-08 06:27:35.113581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.119 [2024-12-08 06:27:35.113595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.119 [2024-12-08 06:27:35.113601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.113608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382700) on tqpair=0x1320690 00:23:45.119 [2024-12-08 06:27:35.113618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.119 [2024-12-08 06:27:35.113627] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.119 [2024-12-08 06:27:35.113633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.113639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382880) on tqpair=0x1320690 00:23:45.119 [2024-12-08 06:27:35.113654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.113662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1320690) 00:23:45.119 [2024-12-08 06:27:35.113672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.119 [2024-12-08 06:27:35.113693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382880, cid 5, qid 0 00:23:45.119 [2024-12-08 06:27:35.113845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.119 [2024-12-08 06:27:35.113861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.119 [2024-12-08 06:27:35.113868] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.113874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382880) on tqpair=0x1320690 00:23:45.119 [2024-12-08 06:27:35.113891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.113900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1320690) 00:23:45.119 [2024-12-08 06:27:35.113910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.119 [2024-12-08 06:27:35.113932] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382880, cid 5, qid 0 00:23:45.119 [2024-12-08 06:27:35.114048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.119 [2024-12-08 06:27:35.114079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.119 [2024-12-08 06:27:35.114088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382880) on tqpair=0x1320690 00:23:45.119 [2024-12-08 06:27:35.114113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1320690) 00:23:45.119 [2024-12-08 06:27:35.114131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.119 [2024-12-08 06:27:35.114153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382880, cid 5, qid 0 00:23:45.119 [2024-12-08 06:27:35.114249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.119 [2024-12-08 06:27:35.114263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.119 [2024-12-08 06:27:35.114270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382880) on tqpair=0x1320690 00:23:45.119 [2024-12-08 06:27:35.114300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1320690) 00:23:45.119 [2024-12-08 06:27:35.114322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.119 [2024-12-08 06:27:35.114334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1320690) 00:23:45.119 [2024-12-08 06:27:35.114350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.119 [2024-12-08 06:27:35.114362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1320690) 00:23:45.119 [2024-12-08 06:27:35.114379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.119 [2024-12-08 06:27:35.114390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1320690) 00:23:45.119 [2024-12-08 06:27:35.114406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.119 [2024-12-08 06:27:35.114428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382880, cid 5, qid 0 00:23:45.119 
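
The four GET LOG PAGE commands just above fetch the mandatory log pages in one burst during the "set supported log pages" state, and each cdw10 value encodes both the log identifier (bits 7:0) and the transfer length (NUMDL, bits 31:16, 0's-based dwords), which is why the c2h_data records that follow report payload sizes of 8192, 512, 512, and 4096 bytes. A quick decode of those values, as an illustrative shell snippet rather than anything the test itself runs:

    # GET LOG PAGE: transfer size = (NUMDL + 1) * 4 bytes.
    for cdw10 in 0x07ff0001 0x007f0002 0x007f0003 0x03ff0005; do
        lid=$(( cdw10 & 0xff ))
        numdl=$(( (cdw10 >> 16) & 0xffff ))
        printf 'cdw10=%s LID=0x%02x bytes=%u\n' "$cdw10" "$lid" $(( (numdl + 1) * 4 ))
    done
    # -> LID 0x01 error log (8192 B), 0x02 SMART/health (512 B),
    #    0x03 firmware slot (512 B), 0x05 command effects (4096 B)
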
[2024-12-08 06:27:35.114439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382700, cid 4, qid 0 00:23:45.119 [2024-12-08 06:27:35.114447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382a00, cid 6, qid 0 00:23:45.119 [2024-12-08 06:27:35.114454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382b80, cid 7, qid 0 00:23:45.119 [2024-12-08 06:27:35.114690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.119 [2024-12-08 06:27:35.114728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.119 [2024-12-08 06:27:35.114737] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114744] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1320690): datao=0, datal=8192, cccid=5 00:23:45.119 [2024-12-08 06:27:35.114751] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1382880) on tqpair(0x1320690): expected_datao=0, payload_size=8192 00:23:45.119 [2024-12-08 06:27:35.114759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114806] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114819] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.119 [2024-12-08 06:27:35.114837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.119 [2024-12-08 06:27:35.114844] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114851] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1320690): datao=0, datal=512, cccid=4 00:23:45.119 [2024-12-08 06:27:35.114859] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1382700) on tqpair(0x1320690): expected_datao=0, payload_size=512 00:23:45.119 [2024-12-08 06:27:35.114866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114876] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114882] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.119 [2024-12-08 06:27:35.114899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.119 [2024-12-08 06:27:35.114906] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114912] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1320690): datao=0, datal=512, cccid=6 00:23:45.119 [2024-12-08 06:27:35.114919] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1382a00) on tqpair(0x1320690): expected_datao=0, payload_size=512 00:23:45.119 [2024-12-08 06:27:35.114927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114936] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114942] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:45.119 [2024-12-08 06:27:35.114959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:45.119 [2024-12-08 06:27:35.114966] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:45.119 [2024-12-08 06:27:35.114972] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1320690): datao=0, datal=4096, cccid=7 00:23:45.119 [2024-12-08 06:27:35.114979] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1382b80) on tqpair(0x1320690): expected_datao=0, payload_size=4096 00:23:45.119 [2024-12-08 06:27:35.114986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.120 [2024-12-08 06:27:35.115022] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:45.120 [2024-12-08 06:27:35.115032] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:45.120 [2024-12-08 06:27:35.155956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.120 [2024-12-08 06:27:35.155976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.120 [2024-12-08 06:27:35.155984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.120 [2024-12-08 06:27:35.155991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382880) on tqpair=0x1320690 00:23:45.120 [2024-12-08 06:27:35.156024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.120 [2024-12-08 06:27:35.156035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.120 [2024-12-08 06:27:35.156042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.120 [2024-12-08 06:27:35.156048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382700) on tqpair=0x1320690 00:23:45.120 [2024-12-08 06:27:35.156063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.120 [2024-12-08 06:27:35.156073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.120 [2024-12-08 06:27:35.156079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.120 [2024-12-08 06:27:35.156085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382a00) on tqpair=0x1320690 00:23:45.120 [2024-12-08 06:27:35.156095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.120 [2024-12-08 06:27:35.156107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.120 [2024-12-08 06:27:35.156115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.120 [2024-12-08 06:27:35.156121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382b80) on tqpair=0x1320690 00:23:45.120 ===================================================== 00:23:45.120 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:45.120 ===================================================== 00:23:45.120 Controller Capabilities/Features 00:23:45.120 ================================ 00:23:45.120 Vendor ID: 8086 00:23:45.120 Subsystem Vendor ID: 8086 00:23:45.120 Serial Number: SPDK00000000000001 00:23:45.120 Model Number: SPDK bdev Controller 00:23:45.120 Firmware Version: 25.01 00:23:45.120 Recommended Arb Burst: 6 00:23:45.120 IEEE OUI Identifier: e4 d2 5c 00:23:45.120 Multi-path I/O 00:23:45.120 May have multiple subsystem ports: Yes 00:23:45.120 May have multiple controllers: Yes 00:23:45.120 Associated with SR-IOV VF: No 00:23:45.120 Max Data Transfer Size: 131072 00:23:45.120 Max Number of Namespaces: 32 00:23:45.120 Max Number of I/O Queues: 127 00:23:45.120 NVMe Specification Version (VS): 1.3 00:23:45.120 NVMe Specification Version (Identify): 1.3 
00:23:45.120 Maximum Queue Entries: 128 00:23:45.120 Contiguous Queues Required: Yes 00:23:45.120 Arbitration Mechanisms Supported 00:23:45.120 Weighted Round Robin: Not Supported 00:23:45.120 Vendor Specific: Not Supported 00:23:45.120 Reset Timeout: 15000 ms 00:23:45.120 Doorbell Stride: 4 bytes 00:23:45.120 NVM Subsystem Reset: Not Supported 00:23:45.120 Command Sets Supported 00:23:45.120 NVM Command Set: Supported 00:23:45.120 Boot Partition: Not Supported 00:23:45.120 Memory Page Size Minimum: 4096 bytes 00:23:45.120 Memory Page Size Maximum: 4096 bytes 00:23:45.120 Persistent Memory Region: Not Supported 00:23:45.120 Optional Asynchronous Events Supported 00:23:45.120 Namespace Attribute Notices: Supported 00:23:45.120 Firmware Activation Notices: Not Supported 00:23:45.120 ANA Change Notices: Not Supported 00:23:45.120 PLE Aggregate Log Change Notices: Not Supported 00:23:45.120 LBA Status Info Alert Notices: Not Supported 00:23:45.120 EGE Aggregate Log Change Notices: Not Supported 00:23:45.120 Normal NVM Subsystem Shutdown event: Not Supported 00:23:45.120 Zone Descriptor Change Notices: Not Supported 00:23:45.120 Discovery Log Change Notices: Not Supported 00:23:45.120 Controller Attributes 00:23:45.120 128-bit Host Identifier: Supported 00:23:45.120 Non-Operational Permissive Mode: Not Supported 00:23:45.120 NVM Sets: Not Supported 00:23:45.120 Read Recovery Levels: Not Supported 00:23:45.120 Endurance Groups: Not Supported 00:23:45.120 Predictable Latency Mode: Not Supported 00:23:45.120 Traffic Based Keep Alive: Not Supported 00:23:45.120 Namespace Granularity: Not Supported 00:23:45.120 SQ Associations: Not Supported 00:23:45.120 UUID List: Not Supported 00:23:45.120 Multi-Domain Subsystem: Not Supported 00:23:45.120 Fixed Capacity Management: Not Supported 00:23:45.120 Variable Capacity Management: Not Supported 00:23:45.120 Delete Endurance Group: Not Supported 00:23:45.120 Delete NVM Set: Not Supported 00:23:45.120 Extended LBA Formats Supported: Not Supported 00:23:45.120 Flexible Data Placement Supported: Not Supported 00:23:45.120 00:23:45.120 Controller Memory Buffer Support 00:23:45.120 ================================ 00:23:45.120 Supported: No 00:23:45.120 00:23:45.120 Persistent Memory Region Support 00:23:45.120 ================================ 00:23:45.120 Supported: No 00:23:45.120 00:23:45.120 Admin Command Set Attributes 00:23:45.120 ============================ 00:23:45.120 Security Send/Receive: Not Supported 00:23:45.120 Format NVM: Not Supported 00:23:45.120 Firmware Activate/Download: Not Supported 00:23:45.120 Namespace Management: Not Supported 00:23:45.120 Device Self-Test: Not Supported 00:23:45.120 Directives: Not Supported 00:23:45.120 NVMe-MI: Not Supported 00:23:45.120 Virtualization Management: Not Supported 00:23:45.120 Doorbell Buffer Config: Not Supported 00:23:45.120 Get LBA Status Capability: Not Supported 00:23:45.120 Command & Feature Lockdown Capability: Not Supported 00:23:45.120 Abort Command Limit: 4 00:23:45.120 Async Event Request Limit: 4 00:23:45.120 Number of Firmware Slots: N/A 00:23:45.120 Firmware Slot 1 Read-Only: N/A 00:23:45.120 Firmware Activation Without Reset: N/A 00:23:45.120 Multiple Update Detection Support: N/A 00:23:45.120 Firmware Update Granularity: No Information Provided 00:23:45.120 Per-Namespace SMART Log: No 00:23:45.120 Asymmetric Namespace Access Log Page: Not Supported 00:23:45.120 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:45.120 Command Effects Log Page: Supported 00:23:45.120 Get Log Page Extended 
Data: Supported 00:23:45.120 Telemetry Log Pages: Not Supported 00:23:45.120 Persistent Event Log Pages: Not Supported 00:23:45.120 Supported Log Pages Log Page: May Support 00:23:45.120 Commands Supported & Effects Log Page: Not Supported 00:23:45.120 Feature Identifiers & Effects Log Page: May Support 00:23:45.120 NVMe-MI Commands & Effects Log Page: May Support 00:23:45.120 Data Area 4 for Telemetry Log: Not Supported 00:23:45.120 Error Log Page Entries Supported: 128 00:23:45.120 Keep Alive: Supported 00:23:45.120 Keep Alive Granularity: 10000 ms 00:23:45.120 00:23:45.120 NVM Command Set Attributes 00:23:45.120 ========================== 00:23:45.120 Submission Queue Entry Size 00:23:45.120 Max: 64 00:23:45.120 Min: 64 00:23:45.120 Completion Queue Entry Size 00:23:45.120 Max: 16 00:23:45.120 Min: 16 00:23:45.120 Number of Namespaces: 32 00:23:45.120 Compare Command: Supported 00:23:45.120 Write Uncorrectable Command: Not Supported 00:23:45.120 Dataset Management Command: Supported 00:23:45.120 Write Zeroes Command: Supported 00:23:45.120 Set Features Save Field: Not Supported 00:23:45.120 Reservations: Supported 00:23:45.120 Timestamp: Not Supported 00:23:45.120 Copy: Supported 00:23:45.120 Volatile Write Cache: Present 00:23:45.120 Atomic Write Unit (Normal): 1 00:23:45.120 Atomic Write Unit (PFail): 1 00:23:45.120 Atomic Compare & Write Unit: 1 00:23:45.120 Fused Compare & Write: Supported 00:23:45.120 Scatter-Gather List 00:23:45.120 SGL Command Set: Supported 00:23:45.120 SGL Keyed: Supported 00:23:45.120 SGL Bit Bucket Descriptor: Not Supported 00:23:45.120 SGL Metadata Pointer: Not Supported 00:23:45.120 Oversized SGL: Not Supported 00:23:45.120 SGL Metadata Address: Not Supported 00:23:45.120 SGL Offset: Supported 00:23:45.120 Transport SGL Data Block: Not Supported 00:23:45.120 Replay Protected Memory Block: Not Supported 00:23:45.120 00:23:45.120 Firmware Slot Information 00:23:45.120 ========================= 00:23:45.120 Active slot: 1 00:23:45.120 Slot 1 Firmware Revision: 25.01 00:23:45.120 00:23:45.120 00:23:45.120 Commands Supported and Effects 00:23:45.120 ============================== 00:23:45.120 Admin Commands 00:23:45.120 -------------- 00:23:45.120 Get Log Page (02h): Supported 00:23:45.120 Identify (06h): Supported 00:23:45.120 Abort (08h): Supported 00:23:45.120 Set Features (09h): Supported 00:23:45.120 Get Features (0Ah): Supported 00:23:45.120 Asynchronous Event Request (0Ch): Supported 00:23:45.120 Keep Alive (18h): Supported 00:23:45.120 I/O Commands 00:23:45.120 ------------ 00:23:45.120 Flush (00h): Supported LBA-Change 00:23:45.120 Write (01h): Supported LBA-Change 00:23:45.120 Read (02h): Supported 00:23:45.120 Compare (05h): Supported 00:23:45.120 Write Zeroes (08h): Supported LBA-Change 00:23:45.120 Dataset Management (09h): Supported LBA-Change 00:23:45.120 Copy (19h): Supported LBA-Change 00:23:45.120 00:23:45.120 Error Log 00:23:45.120 ========= 00:23:45.121 00:23:45.121 Arbitration 00:23:45.121 =========== 00:23:45.121 Arbitration Burst: 1 00:23:45.121 00:23:45.121 Power Management 00:23:45.121 ================ 00:23:45.121 Number of Power States: 1 00:23:45.121 Current Power State: Power State #0 00:23:45.121 Power State #0: 00:23:45.121 Max Power: 0.00 W 00:23:45.121 Non-Operational State: Operational 00:23:45.121 Entry Latency: Not Reported 00:23:45.121 Exit Latency: Not Reported 00:23:45.121 Relative Read Throughput: 0 00:23:45.121 Relative Read Latency: 0 00:23:45.121 Relative Write Throughput: 0 00:23:45.121 Relative Write Latency: 0 
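
The identify dump above is printed by SPDK's userspace initiator; the same controller properties can be cross-checked from the kernel side with nvme-cli, assuming the nvme-tcp kernel module and nvme-cli are available on the host and that the controller enumerates as /dev/nvme0 (both assumptions, not part of this run):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0      # MDTS, AER limit, OACS bits shown above
    nvme id-ns   /dev/nvme0n1    # LBA formats, NGUID/EUI64/UUID
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
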
00:23:45.121 Idle Power: Not Reported 00:23:45.121 Active Power: Not Reported 00:23:45.121 Non-Operational Permissive Mode: Not Supported 00:23:45.121 00:23:45.121 Health Information 00:23:45.121 ================== 00:23:45.121 Critical Warnings: 00:23:45.121 Available Spare Space: OK 00:23:45.121 Temperature: OK 00:23:45.121 Device Reliability: OK 00:23:45.121 Read Only: No 00:23:45.121 Volatile Memory Backup: OK 00:23:45.121 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:45.121 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:45.121 Available Spare: 0% 00:23:45.121 Available Spare Threshold: 0% 00:23:45.121 Life Percentage Used:[2024-12-08 06:27:35.156230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.156241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1320690) 00:23:45.121 [2024-12-08 06:27:35.156252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.121 [2024-12-08 06:27:35.156276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382b80, cid 7, qid 0 00:23:45.121 [2024-12-08 06:27:35.156458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.121 [2024-12-08 06:27:35.156471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.121 [2024-12-08 06:27:35.156478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.156484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382b80) on tqpair=0x1320690 00:23:45.121 [2024-12-08 06:27:35.156531] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:45.121 [2024-12-08 06:27:35.156551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382100) on tqpair=0x1320690 00:23:45.121 [2024-12-08 06:27:35.156561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.121 [2024-12-08 06:27:35.156570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382280) on tqpair=0x1320690 00:23:45.121 [2024-12-08 06:27:35.156577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.121 [2024-12-08 06:27:35.156584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382400) on tqpair=0x1320690 00:23:45.121 [2024-12-08 06:27:35.156591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.121 [2024-12-08 06:27:35.156599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.121 [2024-12-08 06:27:35.156606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.121 [2024-12-08 06:27:35.156617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.156625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.156631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.121 [2024-12-08 06:27:35.156641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:45.121 [2024-12-08 06:27:35.156663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.121 [2024-12-08 06:27:35.156818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.121 [2024-12-08 06:27:35.156833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.121 [2024-12-08 06:27:35.156840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.156847] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.121 [2024-12-08 06:27:35.156858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.156866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.156872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.121 [2024-12-08 06:27:35.156882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.121 [2024-12-08 06:27:35.156910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.121 [2024-12-08 06:27:35.157026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.121 [2024-12-08 06:27:35.157041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.121 [2024-12-08 06:27:35.157047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.121 [2024-12-08 06:27:35.157061] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:45.121 [2024-12-08 06:27:35.157069] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:45.121 [2024-12-08 06:27:35.157085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.121 [2024-12-08 06:27:35.157109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.121 [2024-12-08 06:27:35.157131] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.121 [2024-12-08 06:27:35.157287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.121 [2024-12-08 06:27:35.157298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.121 [2024-12-08 06:27:35.157305] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.121 [2024-12-08 06:27:35.157327] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.121 [2024-12-08 06:27:35.157352] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.121 [2024-12-08 06:27:35.157372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.121 [2024-12-08 06:27:35.157485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.121 [2024-12-08 06:27:35.157499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.121 [2024-12-08 06:27:35.157506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.121 [2024-12-08 06:27:35.157529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.121 [2024-12-08 06:27:35.157553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.121 [2024-12-08 06:27:35.157574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.121 [2024-12-08 06:27:35.157656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.121 [2024-12-08 06:27:35.157669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.121 [2024-12-08 06:27:35.157676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.121 [2024-12-08 06:27:35.157714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.121 [2024-12-08 06:27:35.157747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.121 [2024-12-08 06:27:35.157788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.121 [2024-12-08 06:27:35.157890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.121 [2024-12-08 06:27:35.157904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.121 [2024-12-08 06:27:35.157911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.121 [2024-12-08 06:27:35.157935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.157951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.121 [2024-12-08 06:27:35.157962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.121 [2024-12-08 06:27:35.157983] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.121 [2024-12-08 06:27:35.158099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.121 [2024-12-08 06:27:35.158113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.121 [2024-12-08 06:27:35.158119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.158126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.121 [2024-12-08 06:27:35.158143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.158152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.121 [2024-12-08 06:27:35.158158] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.121 [2024-12-08 06:27:35.158168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.121 [2024-12-08 06:27:35.158188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.121 [2024-12-08 06:27:35.158270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.121 [2024-12-08 06:27:35.158283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.122 [2024-12-08 06:27:35.158290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.158296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.122 [2024-12-08 06:27:35.158311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.158320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.158326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.122 [2024-12-08 06:27:35.158336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.122 [2024-12-08 06:27:35.158357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.122 [2024-12-08 06:27:35.158439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.122 [2024-12-08 06:27:35.158452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.122 [2024-12-08 06:27:35.158458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.158465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.122 [2024-12-08 06:27:35.158480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.158488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.158495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.122 [2024-12-08 06:27:35.158504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.122 [2024-12-08 06:27:35.158525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.122 [2024-12-08 06:27:35.158625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.122 [2024-12-08 
06:27:35.158638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.122 [2024-12-08 06:27:35.158645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.158651] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.122 [2024-12-08 06:27:35.158666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.158675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.158681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.122 [2024-12-08 06:27:35.158691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.122 [2024-12-08 06:27:35.158736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.122 [2024-12-08 06:27:35.158824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.122 [2024-12-08 06:27:35.158838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.122 [2024-12-08 06:27:35.158845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.158851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.122 [2024-12-08 06:27:35.158867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.158876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.158882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.122 [2024-12-08 06:27:35.158892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.122 [2024-12-08 06:27:35.158913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.122 [2024-12-08 06:27:35.158998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.122 [2024-12-08 06:27:35.159027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.122 [2024-12-08 06:27:35.159034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.159040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.122 [2024-12-08 06:27:35.159057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.159066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.159072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.122 [2024-12-08 06:27:35.159082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.122 [2024-12-08 06:27:35.159103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.122 [2024-12-08 06:27:35.159187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.122 [2024-12-08 06:27:35.159199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.122 [2024-12-08 06:27:35.159206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.122 
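
From "Prepare to destruct SSD" onward, the outstanding AER requests complete as ABORTED - SQ DELETION, and the long run of FABRIC PROPERTY GET qid:0 cid:3 records is the host polling the CSTS property after writing CC.SHN, waiting for CSTS.SHST to signal that shutdown has finished; the trace below ends with "shutdown complete in 6 milliseconds", well inside the 10000 ms shutdown timeout logged above. A rough kernel-side equivalent of one such poll, with the device name and output format assumed for illustration:

    # CSTS is the fabrics property at offset 0x1c; SHST is bits 3:2.
    nvme get-property /dev/nvme0 --offset=0x1c --human-readable
    # Extracting SHST from a raw CSTS value (0b10 = shutdown complete):
    csts=0x9; echo $(( (csts >> 2) & 0x3 ))   # -> 2
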
[2024-12-08 06:27:35.159213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.122 [2024-12-08 06:27:35.159228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.159237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.159242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.122 [2024-12-08 06:27:35.159252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.122 [2024-12-08 06:27:35.159273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.122 [2024-12-08 06:27:35.159368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.122 [2024-12-08 06:27:35.159384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.122 [2024-12-08 06:27:35.159391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.159397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.122 [2024-12-08 06:27:35.159413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.159422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.159428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.122 [2024-12-08 06:27:35.159437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.122 [2024-12-08 06:27:35.159458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.122 [2024-12-08 06:27:35.159540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.122 [2024-12-08 06:27:35.159553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.122 [2024-12-08 06:27:35.159559] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.159566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.122 [2024-12-08 06:27:35.159581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.159590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.159596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.122 [2024-12-08 06:27:35.159605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.122 [2024-12-08 06:27:35.159626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.122 [2024-12-08 06:27:35.163736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.122 [2024-12-08 06:27:35.163752] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.122 [2024-12-08 06:27:35.163760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.163766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.122 [2024-12-08 06:27:35.163785] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.163794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.163801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1320690) 00:23:45.122 [2024-12-08 06:27:35.163811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.122 [2024-12-08 06:27:35.163834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1382580, cid 3, qid 0 00:23:45.122 [2024-12-08 06:27:35.163982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:45.122 [2024-12-08 06:27:35.163995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:45.122 [2024-12-08 06:27:35.164002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:45.122 [2024-12-08 06:27:35.164009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1382580) on tqpair=0x1320690 00:23:45.122 [2024-12-08 06:27:35.164037] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:23:45.122 0% 00:23:45.122 Data Units Read: 0 00:23:45.122 Data Units Written: 0 00:23:45.122 Host Read Commands: 0 00:23:45.122 Host Write Commands: 0 00:23:45.122 Controller Busy Time: 0 minutes 00:23:45.122 Power Cycles: 0 00:23:45.122 Power On Hours: 0 hours 00:23:45.122 Unsafe Shutdowns: 0 00:23:45.122 Unrecoverable Media Errors: 0 00:23:45.122 Lifetime Error Log Entries: 0 00:23:45.122 Warning Temperature Time: 0 minutes 00:23:45.122 Critical Temperature Time: 0 minutes 00:23:45.122 00:23:45.122 Number of Queues 00:23:45.122 ================ 00:23:45.122 Number of I/O Submission Queues: 127 00:23:45.122 Number of I/O Completion Queues: 127 00:23:45.122 00:23:45.122 Active Namespaces 00:23:45.122 ================= 00:23:45.122 Namespace ID:1 00:23:45.122 Error Recovery Timeout: Unlimited 00:23:45.122 Command Set Identifier: NVM (00h) 00:23:45.122 Deallocate: Supported 00:23:45.122 Deallocated/Unwritten Error: Not Supported 00:23:45.122 Deallocated Read Value: Unknown 00:23:45.122 Deallocate in Write Zeroes: Not Supported 00:23:45.122 Deallocated Guard Field: 0xFFFF 00:23:45.123 Flush: Supported 00:23:45.123 Reservation: Supported 00:23:45.123 Namespace Sharing Capabilities: Multiple Controllers 00:23:45.123 Size (in LBAs): 131072 (0GiB) 00:23:45.123 Capacity (in LBAs): 131072 (0GiB) 00:23:45.123 Utilization (in LBAs): 131072 (0GiB) 00:23:45.123 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:45.123 EUI64: ABCDEF0123456789 00:23:45.123 UUID: 99296e61-130e-4a6a-957c-5f8c2af1a2fb 00:23:45.123 Thin Provisioning: Not Supported 00:23:45.123 Per-NS Atomic Units: Yes 00:23:45.123 Atomic Boundary Size (Normal): 0 00:23:45.123 Atomic Boundary Size (PFail): 0 00:23:45.123 Atomic Boundary Offset: 0 00:23:45.123 Maximum Single Source Range Length: 65535 00:23:45.123 Maximum Copy Length: 65535 00:23:45.123 Maximum Source Range Count: 1 00:23:45.123 NGUID/EUI64 Never Reused: No 00:23:45.123 Namespace Write Protected: No 00:23:45.123 Number of LBA Formats: 1 00:23:45.123 Current LBA Format: LBA Format #00 00:23:45.123 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:45.123 00:23:45.123 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:45.123 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:23:45.123 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.123 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:45.123 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.123 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:45.123 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:45.123 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:45.123 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:45.123 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:45.123 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:45.123 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:45.123 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:45.123 rmmod nvme_tcp 00:23:45.123 rmmod nvme_fabrics 00:23:45.123 rmmod nvme_keyring 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1127147 ']' 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1127147 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1127147 ']' 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1127147 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1127147 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1127147' 00:23:45.380 killing process with pid 1127147 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1127147 00:23:45.380 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1127147 00:23:45.638 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:45.638 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:45.638 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:45.638 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:45.638 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:45.638 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:45.638 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- 
# iptables-restore 00:23:45.638 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:45.638 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:45.638 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.638 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.638 06:27:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.546 06:27:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.546 00:23:47.546 real 0m5.697s 00:23:47.546 user 0m4.829s 00:23:47.546 sys 0m2.030s 00:23:47.546 06:27:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.546 06:27:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.546 ************************************ 00:23:47.546 END TEST nvmf_identify 00:23:47.546 ************************************ 00:23:47.546 06:27:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:47.546 06:27:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:47.546 06:27:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.546 06:27:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.546 ************************************ 00:23:47.546 START TEST nvmf_perf 00:23:47.546 ************************************ 00:23:47.546 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:47.806 * Looking for test storage... 
00:23:47.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:47.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.806 --rc genhtml_branch_coverage=1 00:23:47.806 --rc genhtml_function_coverage=1 00:23:47.806 --rc genhtml_legend=1 00:23:47.806 --rc geninfo_all_blocks=1 00:23:47.806 --rc geninfo_unexecuted_blocks=1 00:23:47.806 00:23:47.806 ' 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:47.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.806 --rc genhtml_branch_coverage=1 00:23:47.806 --rc genhtml_function_coverage=1 00:23:47.806 --rc genhtml_legend=1 00:23:47.806 --rc geninfo_all_blocks=1 00:23:47.806 --rc geninfo_unexecuted_blocks=1 00:23:47.806 00:23:47.806 ' 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:47.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.806 --rc genhtml_branch_coverage=1 00:23:47.806 --rc genhtml_function_coverage=1 00:23:47.806 --rc genhtml_legend=1 00:23:47.806 --rc geninfo_all_blocks=1 00:23:47.806 --rc geninfo_unexecuted_blocks=1 00:23:47.806 00:23:47.806 ' 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:47.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.806 --rc genhtml_branch_coverage=1 00:23:47.806 --rc genhtml_function_coverage=1 00:23:47.806 --rc genhtml_legend=1 00:23:47.806 --rc geninfo_all_blocks=1 00:23:47.806 --rc geninfo_unexecuted_blocks=1 00:23:47.806 00:23:47.806 ' 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.806 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:47.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.807 06:27:37 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:47.807 06:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.341 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:50.342 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:50.342 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:50.342 Found net devices under 0000:84:00.0: cvl_0_0 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.342 06:27:39 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:50.342 Found net devices under 0000:84:00.1: cvl_0_1 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:50.342 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.342 06:27:40 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:50.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:50.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms
00:23:50.342
00:23:50.342 --- 10.0.0.2 ping statistics ---
00:23:50.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:50.342 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:50.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:50.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms
00:23:50.342
00:23:50.342 --- 10.0.0.1 ping statistics ---
00:23:50.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:50.342 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1129256
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1129256
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1129256 ']'
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:23:50.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.342 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:50.342 [2024-12-08 06:27:40.132844] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:23:50.342 [2024-12-08 06:27:40.132925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.342 [2024-12-08 06:27:40.203530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:50.342 [2024-12-08 06:27:40.257771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.342 [2024-12-08 06:27:40.257844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.342 [2024-12-08 06:27:40.257872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.343 [2024-12-08 06:27:40.257884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.343 [2024-12-08 06:27:40.257893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.343 [2024-12-08 06:27:40.259513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.343 [2024-12-08 06:27:40.259622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.343 [2024-12-08 06:27:40.259715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:50.343 [2024-12-08 06:27:40.259717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.343 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.343 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:50.343 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:50.343 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:50.343 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:50.343 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.343 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:50.343 06:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:53.622 06:27:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:53.622 06:27:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:53.880 06:27:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:23:53.880 06:27:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:54.138 06:27:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
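(Condensed for reference, the target bring-up that perf.sh traces next is the rpc.py sequence below. This is a sketch assembled from the commands as they appear in the log; $rpc is shorthand introduced here for the rpc.py path used throughout, and the NQN, serial number and 10.0.0.2:4420 listener are this run's values.)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512            # 64 MB malloc bdev, 512-byte blocks -> "Malloc0"
    $rpc nvmf_create_transport -t tcp -o      # TCP transport; "-o" comes from NVMF_TRANSPORT_OPTS='-t tcp -o'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # becomes NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # becomes NSID 2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420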
00:23:54.138 06:27:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']'
00:23:54.138 06:27:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:23:54.138 06:27:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:23:54.138 06:27:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:23:54.396 [2024-12-08 06:27:44.351181] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:54.396 06:27:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:54.654 06:27:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:23:54.655 06:27:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:54.913 06:27:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:23:54.913 06:27:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:23:55.171 06:27:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:55.429 [2024-12-08 06:27:45.463209] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:55.429 06:27:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:23:55.687 06:27:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']'
00:23:55.687 06:27:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0'
00:23:55.687 06:27:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:23:55.687 06:27:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0'
00:23:57.060 Initializing NVMe Controllers
00:23:57.060 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54]
00:23:57.060 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0
00:23:57.060 Initialization complete. Launching workers.
00:23:57.060 ========================================================
00:23:57.060 Latency(us)
00:23:57.060 Device Information : IOPS MiB/s Average min max
00:23:57.060 PCIE (0000:82:00.0) NSID 1 from core 0: 86393.89 337.48 369.87 33.90 8266.84
00:23:57.060 ========================================================
00:23:57.060 Total : 86393.89 337.48 369.87 33.90 8266.84
00:23:57.060
00:23:57.060 06:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:58.430 Initializing NVMe Controllers
00:23:58.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:58.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:58.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:58.430 Initialization complete. Launching workers.
00:23:58.430 ========================================================
00:23:58.430 Latency(us)
00:23:58.430 Device Information : IOPS MiB/s Average min max
00:23:58.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 104.00 0.41 9918.55 140.66 45714.09
00:23:58.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16466.38 6929.88 47899.06
00:23:58.430 ========================================================
00:23:58.430 Total : 165.00 0.64 12339.27 140.66 47899.06
00:23:58.430
00:23:58.430 06:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:59.802 Initializing NVMe Controllers
00:23:59.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:59.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:59.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:59.802 Initialization complete. Launching workers.
00:23:59.802 ========================================================
00:23:59.802 Latency(us)
00:23:59.802 Device Information : IOPS MiB/s Average min max
00:23:59.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8333.75 32.55 3838.36 578.99 7611.15
00:23:59.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3836.23 14.99 8390.08 4942.74 16086.58
00:23:59.802 ========================================================
00:23:59.802 Total : 12169.98 47.54 5273.16 578.99 16086.58
00:23:59.802
00:23:59.802 06:27:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:23:59.802 06:27:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:23:59.802 06:27:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:02.331 Initializing NVMe Controllers
00:24:02.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:02.331 Controller IO queue size 128, less than required.
00:24:02.331 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:02.331 Controller IO queue size 128, less than required.
00:24:02.331 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:02.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:02.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:02.331 Initialization complete. Launching workers.
00:24:02.331 ========================================================
00:24:02.331 Latency(us)
00:24:02.331 Device Information : IOPS MiB/s Average min max
00:24:02.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1350.42 337.60 97186.74 66176.86 145247.50
00:24:02.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 547.97 136.99 238865.83 78380.27 381787.33
00:24:02.331 ========================================================
00:24:02.331 Total : 1898.38 474.60 138082.26 66176.86 381787.33
00:24:02.331
00:24:02.331 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:02.589 No valid NVMe controllers or AIO or URING devices found
00:24:02.589 Initializing NVMe Controllers
00:24:02.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:02.589 Controller IO queue size 128, less than required.
00:24:02.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:02.589 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:02.589 Controller IO queue size 128, less than required.
00:24:02.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:02.589 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:24:02.589 WARNING: Some requested NVMe devices were skipped
00:24:02.589 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:24:05.122 Initializing NVMe Controllers
00:24:05.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:05.122 Controller IO queue size 128, less than required.
00:24:05.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:05.122 Controller IO queue size 128, less than required.
00:24:05.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:05.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:05.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:05.122 Initialization complete. Launching workers.
00:24:05.122
00:24:05.122 ====================
00:24:05.122 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:24:05.122 TCP transport:
00:24:05.122 polls: 8199
00:24:05.122 idle_polls: 5770
00:24:05.122 sock_completions: 2429
00:24:05.122 nvme_completions: 4727
00:24:05.122 submitted_requests: 7130
00:24:05.122 queued_requests: 1
00:24:05.122
00:24:05.122 ====================
00:24:05.122 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:24:05.122 TCP transport:
00:24:05.122 polls: 10734
00:24:05.122 idle_polls: 8155
00:24:05.122 sock_completions: 2579
00:24:05.122 nvme_completions: 4855
00:24:05.122 submitted_requests: 7266
00:24:05.122 queued_requests: 1
00:24:05.122 ========================================================
00:24:05.122 Latency(us)
00:24:05.122 Device Information : IOPS MiB/s Average min max
00:24:05.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1179.10 294.77 113219.58 62852.70 194449.53
00:24:05.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1211.03 302.76 106683.82 55494.38 160895.25
00:24:05.122 ========================================================
00:24:05.122 Total : 2390.13 597.53 109908.04 55494.38 194449.53
00:24:05.379 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:24:05.379 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:05.679 rmmod nvme_tcp
00:24:05.679 rmmod nvme_fabrics
00:24:05.679 rmmod nvme_keyring
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1129256 ']'
00:24:05.679 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1129256
00:24:05.680 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1129256 ']'
00:24:05.680 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1129256
00:24:05.680 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:24:05.680 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:05.680 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1129256
00:24:05.680 06:27:55
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:05.680 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:05.680 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1129256' 00:24:05.680 killing process with pid 1129256 00:24:05.680 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1129256 00:24:05.680 06:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1129256 00:24:07.084 06:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:07.084 06:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:07.084 06:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:07.084 06:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:07.084 06:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:07.084 06:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:07.084 06:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:07.084 06:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:07.084 06:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:07.084 06:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.084 06:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.084 06:27:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.623 06:27:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:09.623 00:24:09.623 real 0m21.599s 00:24:09.623 user 1m6.420s 00:24:09.623 sys 0m5.990s 00:24:09.623 06:27:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:09.623 06:27:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:09.623 ************************************ 00:24:09.623 END TEST nvmf_perf 00:24:09.623 ************************************ 00:24:09.623 06:27:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:09.623 06:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:09.623 06:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:09.623 06:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.623 ************************************ 00:24:09.623 START TEST nvmf_fio_host 00:24:09.623 ************************************ 00:24:09.623 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:09.623 * Looking for test storage... 
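(For quick reference, the nvmf_perf sweep that just finished reduces to one local PCIe baseline plus five NVMe/TCP invocations of spdk_nvme_perf. The recap below is a sketch; $perf and $tgt are shorthand introduced here, while every flag is copied from the runs logged above.)

    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    tgt='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    $perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0'   # local baseline, ~86K IOPS
    $perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r "$tgt"                                    # QD1 over TCP
    $perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r "$tgt"                               # QD32 over TCP
    $perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$tgt"                       # 256 KiB IOs
    $perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r "$tgt" -c 0xf -P 4             # skipped: 36964 B is not 512 B aligned
    $perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r "$tgt" --transport-stat               # emits the poll/completion counters above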
00:24:09.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:09.623 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:09.623 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:09.623 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:09.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.624 --rc genhtml_branch_coverage=1 00:24:09.624 --rc genhtml_function_coverage=1 00:24:09.624 --rc genhtml_legend=1 00:24:09.624 --rc geninfo_all_blocks=1 00:24:09.624 --rc geninfo_unexecuted_blocks=1 00:24:09.624 00:24:09.624 ' 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:09.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.624 --rc genhtml_branch_coverage=1 00:24:09.624 --rc genhtml_function_coverage=1 00:24:09.624 --rc genhtml_legend=1 00:24:09.624 --rc geninfo_all_blocks=1 00:24:09.624 --rc geninfo_unexecuted_blocks=1 00:24:09.624 00:24:09.624 ' 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:09.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.624 --rc genhtml_branch_coverage=1 00:24:09.624 --rc genhtml_function_coverage=1 00:24:09.624 --rc genhtml_legend=1 00:24:09.624 --rc geninfo_all_blocks=1 00:24:09.624 --rc geninfo_unexecuted_blocks=1 00:24:09.624 00:24:09.624 ' 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:09.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.624 --rc genhtml_branch_coverage=1 00:24:09.624 --rc genhtml_function_coverage=1 00:24:09.624 --rc genhtml_legend=1 00:24:09.624 --rc geninfo_all_blocks=1 00:24:09.624 --rc geninfo_unexecuted_blocks=1 00:24:09.624 00:24:09.624 ' 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.624 06:27:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.624 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:09.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:09.625 
06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:09.625 06:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:12.156 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.156 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:12.157 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:12.157 Found net devices under 0000:84:00.0: cvl_0_0 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:12.157 Found net devices under 0000:84:00.1: cvl_0_1 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:12.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:12.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms
00:24:12.157
00:24:12.157 --- 10.0.0.2 ping statistics ---
00:24:12.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:12.157 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:12.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:12.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms
00:24:12.157
00:24:12.157 --- 10.0.0.1 ping statistics ---
00:24:12.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:12.157 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1133250
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1133250
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1133250 ']'
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:12.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:12.157 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.157 [2024-12-08 06:28:01.939667] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:24:12.157 [2024-12-08 06:28:01.939756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.157 [2024-12-08 06:28:02.013534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:12.157 [2024-12-08 06:28:02.074467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.157 [2024-12-08 06:28:02.074540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.157 [2024-12-08 06:28:02.074569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.157 [2024-12-08 06:28:02.074581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.157 [2024-12-08 06:28:02.074599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.157 [2024-12-08 06:28:02.076449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.157 [2024-12-08 06:28:02.076515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.157 [2024-12-08 06:28:02.076581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:12.157 [2024-12-08 06:28:02.076584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.157 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.157 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:12.157 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:12.415 [2024-12-08 06:28:02.446829] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.415 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:12.415 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:12.415 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.415 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:12.980 Malloc1 00:24:12.980 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:13.237 06:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:13.494 06:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.751 [2024-12-08 06:28:03.720381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.751 06:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:24:14.008 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:14.009 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:24:14.009 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:24:14.009 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:24:14.009 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:24:14.009 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:24:14.009 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:24:14.266 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:24:14.266 fio-3.35
00:24:14.266 Starting 1 thread
00:24:16.790
00:24:16.790 test: (groupid=0, jobs=1): err= 0: pid=1133724: Sun Dec 8 06:28:06 2024
00:24:16.790 read: IOPS=8906, BW=34.8MiB/s (36.5MB/s)(69.8MiB/2006msec)
00:24:16.790 slat (usec): min=2, max=116, avg= 2.99, stdev= 2.14
00:24:16.790 clat (usec): min=2386, max=13741, avg=7864.17, stdev=630.42
00:24:16.790 lat (usec): min=2409, max=13744, avg=7867.16, stdev=630.31
00:24:16.790 clat percentiles (usec):
00:24:16.790 | 1.00th=[ 6390], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373],
00:24:16.790 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029],
00:24:16.790 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848],
00:24:16.790 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[11994], 99.95th=[13304],
00:24:16.790 | 99.99th=[13698]
00:24:16.790 bw ( KiB/s): min=34816, max=36000, per=99.92%, avg=35598.00, stdev=533.08, samples=4
00:24:16.790 iops : min= 8704, max= 9000, avg=8899.50, stdev=133.27, samples=4
00:24:16.790 write: IOPS=8922, BW=34.9MiB/s (36.5MB/s)(69.9MiB/2006msec); 0 zone resets
00:24:16.790 slat (nsec): min=2352, max=98926, avg=3223.94, stdev=2087.11
00:24:16.790 clat (usec): min=1093, max=12025, avg=6454.42, stdev=522.64
00:24:16.790 lat (usec): min=1101, max=12028, avg=6457.65, stdev=522.57
00:24:16.790 clat percentiles (usec):
00:24:16.790 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063],
00:24:16.790 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587],
00:24:16.790 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242],
00:24:16.790 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[10159], 99.95th=[11076],
00:24:16.790 | 99.99th=[11863]
00:24:16.790 bw ( KiB/s): min=35360, max=35944, per=99.97%, avg=35680.00, stdev=241.15, samples=4
00:24:16.790 iops : min= 8840, max= 8986, avg=8920.00, stdev=60.29, samples=4
00:24:16.790 lat (msec) : 2=0.03%, 4=0.11%, 10=99.71%, 20=0.14%
00:24:16.790 cpu : usr=70.92%, sys=27.83%, ctx=68, majf=0, minf=30
00:24:16.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:24:16.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:16.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:24:16.790 issued rwts: total=17867,17899,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:16.790 latency : target=0, window=0, percentile=100.00%, depth=128
00:24:16.790
00:24:16.790 Run status group 0 (all jobs):
00:24:16.790 READ: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.8MiB (73.2MB), run=2006-2006msec
00:24:16.790 WRITE: bw=34.9MiB/s (36.5MB/s), 34.9MiB/s-34.9MiB/s (36.5MB/s-36.5MB/s), io=69.9MiB (73.3MB), run=2006-2006msec
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:24:16.790 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:24:16.790 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:24:16.790 fio-3.35
00:24:16.790 Starting 1 thread
00:24:19.321
00:24:19.321 test: (groupid=0, jobs=1): err= 0: pid=1134062: Sun Dec 8 06:28:09 2024
00:24:19.321 read: IOPS=8214, BW=128MiB/s (135MB/s)(258MiB/2010msec)
00:24:19.321 slat (usec): min=2, max=132, avg= 4.27, stdev= 2.40
00:24:19.321 clat (usec): min=2437, max=16568, avg=8966.97, stdev=2028.11
00:24:19.321 lat (usec): min=2441, max=16571, avg=8971.24, stdev=2028.14
00:24:19.321 clat percentiles (usec):
00:24:19.321 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7177],
00:24:19.321 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9503],
00:24:19.321 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11600], 95.00th=[12387],
00:24:19.321 | 99.00th=[14222], 99.50th=[14615], 99.90th=[15270], 99.95th=[15401],
00:24:19.321 | 99.99th=[15795]
00:24:19.321 bw ( KiB/s): min=53504, max=79776, per=50.89%, avg=66888.00, stdev=11789.42, samples=4
00:24:19.321 iops : min= 3344, max= 4986, avg=4180.50, stdev=736.84, samples=4
00:24:19.321 write: IOPS=4842, BW=75.7MiB/s (79.3MB/s)(137MiB/1812msec); 0 zone resets
00:24:19.321 slat (usec): min=30, max=192, avg=37.61, stdev= 6.37
00:24:19.321 clat (usec): min=4640, max=20380, avg=11691.76, stdev=1897.93
00:24:19.321 lat (usec): min=4677, max=20431, avg=11729.37, stdev=1897.66
00:24:19.321 clat percentiles (usec):
00:24:19.321 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10159],
00:24:19.321 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994],
00:24:19.321 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14222], 95.00th=[15270],
00:24:19.321 | 99.00th=[16712], 99.50th=[17171], 99.90th=[19792], 99.95th=[20055],
00:24:19.321 | 99.99th=[20317]
00:24:19.321 bw ( KiB/s): min=55552, max=82944, per=90.10%, avg=69816.00, stdev=12174.60, samples=4
00:24:19.321 iops : min= 3472, max= 5184, avg=4363.50, stdev=760.91, samples=4
00:24:19.321 lat (msec) : 4=0.16%, 10=49.79%, 20=50.02%, 50=0.03%
00:24:19.321 cpu : usr=83.18%, sys=15.92%, ctx=37, majf=0, minf=66
00:24:19.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:24:19.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:19.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:24:19.321 issued rwts: total=16512,8775,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:19.321 latency : target=0, window=0, percentile=100.00%, depth=128
00:24:19.321
00:24:19.321 Run status group 0 (all jobs):
00:24:19.321 READ: bw=128MiB/s (135MB/s), 128MiB/s-128MiB/s (135MB/s-135MB/s), io=258MiB (271MB), run=2010-2010msec
00:24:19.321 WRITE: bw=75.7MiB/s (79.3MB/s), 75.7MiB/s-75.7MiB/s (79.3MB/s-79.3MB/s), io=137MiB (144MB), run=1812-1812msec
00:24:19.321 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:19.579 rmmod nvme_tcp
00:24:19.579 rmmod nvme_fabrics
00:24:19.579 rmmod nvme_keyring
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1133250 ']'
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1133250
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1133250 ']'
00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 1133250 00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1133250 00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1133250' 00:24:19.579 killing process with pid 1133250 00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1133250 00:24:19.579 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1133250 00:24:19.839 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:19.839 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:19.839 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:19.839 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:19.839 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:19.839 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:19.839 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:19.839 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:19.839 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:19.839 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.839 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.839 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.747 06:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:21.747 00:24:21.747 real 0m12.554s 00:24:21.747 user 0m37.000s 00:24:21.747 sys 0m3.886s 00:24:21.747 06:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.747 06:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.748 ************************************ 00:24:21.748 END TEST nvmf_fio_host 00:24:21.748 ************************************ 00:24:22.009 06:28:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:22.009 06:28:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:22.009 06:28:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:22.009 06:28:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.009 ************************************ 00:24:22.009 START TEST nvmf_failover 00:24:22.009 ************************************ 00:24:22.009 06:28:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:22.009 * Looking for test storage... 00:24:22.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:22.009 06:28:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:22.009 06:28:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:24:22.009 06:28:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:22.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.009 --rc genhtml_branch_coverage=1 00:24:22.009 --rc genhtml_function_coverage=1 00:24:22.009 --rc genhtml_legend=1 00:24:22.009 --rc geninfo_all_blocks=1 00:24:22.009 --rc geninfo_unexecuted_blocks=1 00:24:22.009 00:24:22.009 ' 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:22.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.009 --rc genhtml_branch_coverage=1 00:24:22.009 --rc genhtml_function_coverage=1 00:24:22.009 --rc genhtml_legend=1 00:24:22.009 --rc geninfo_all_blocks=1 00:24:22.009 --rc geninfo_unexecuted_blocks=1 00:24:22.009 00:24:22.009 ' 00:24:22.009 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:22.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.010 --rc genhtml_branch_coverage=1 00:24:22.010 --rc genhtml_function_coverage=1 00:24:22.010 --rc genhtml_legend=1 00:24:22.010 --rc geninfo_all_blocks=1 00:24:22.010 --rc geninfo_unexecuted_blocks=1 00:24:22.010 00:24:22.010 ' 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:22.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.010 --rc genhtml_branch_coverage=1 00:24:22.010 --rc genhtml_function_coverage=1 00:24:22.010 --rc genhtml_legend=1 00:24:22.010 --rc geninfo_all_blocks=1 00:24:22.010 --rc geninfo_unexecuted_blocks=1 00:24:22.010 00:24:22.010 ' 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:22.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:22.010 06:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:24:24.546 Found 0000:84:00.0 (0x8086 - 0x159b)
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:24:24.546 Found 0000:84:00.1 (0x8086 - 0x159b)
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:24:24.546 Found net devices under 0000:84:00.0: cvl_0_0
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:24:24.546 Found net devices under 0000:84:00.1: cvl_0_1
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:24.546 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:24.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:24.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms
00:24:24.547
00:24:24.547 --- 10.0.0.2 ping statistics ---
00:24:24.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:24.547 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:24.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:24.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms
00:24:24.547
00:24:24.547 --- 10.0.0.1 ping statistics ---
00:24:24.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:24.547 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1136280
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1136280
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1136280 ']'
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:24.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:24.547 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
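nvmfappstart amounts to launching nvmf_tgt inside the target's namespace and polling until the RPC socket answers; waitforlisten is what turns "process forked" into "target ready for rpc.py". A reduced sketch of that launch-and-wait pattern (the retry loop and relative paths are illustrative, assuming the namespace set up above; rpc_get_methods is used here simply as a cheap RPC that succeeds once the app is listening):

    # Sketch: start the target in its netns and wait for the RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xE &
    nvmfpid=$!
    for ((i = 100; i > 0; i--)); do
        # succeeds only once the app is up and listening on /var/tmp/spdk.sock
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done
    (( i > 0 )) || { echo "nvmf_tgt did not come up"; kill "$nvmfpid"; exit 1; }

The target's own startup output follows in the log: DPDK EAL initialization, then one reactor per core in the 0xE mask.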
00:24:24.547 [2024-12-08 06:28:14.436168] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:24:24.547 [2024-12-08 06:28:14.436277] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:24.547 [2024-12-08 06:28:14.515591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:24.547 [2024-12-08 06:28:14.573687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:24.547 [2024-12-08 06:28:14.573762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:24.547 [2024-12-08 06:28:14.573791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:24.547 [2024-12-08 06:28:14.573803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:24.547 [2024-12-08 06:28:14.573812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:24.547 [2024-12-08 06:28:14.575288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:24.547 [2024-12-08 06:28:14.575350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:24:24.547 [2024-12-08 06:28:14.575354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:24.803 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:24.803 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:24:24.803 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:24.803 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:24.803 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:24.803 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:24.803 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:25.061 [2024-12-08 06:28:14.971487] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:25.061 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:25.318 Malloc0
00:24:25.318 06:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:25.575 06:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:25.831 06:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:26.089 [2024-12-08 06:28:16.073421] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:26.089 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:26.346 [2024-12-08 06:28:16.342225] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:26.346 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:26.604 [2024-12-08 06:28:16.603210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:26.604 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1136583
00:24:26.604 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:24:26.604 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:26.604 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1136583 /var/tmp/bdevperf.sock
00:24:26.604 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1136583 ']'
00:24:26.604 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:26.604 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:26.604 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:26.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:26.604 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:26.604 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:26.860 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:26.860 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:24:26.860 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:27.423 NVMe0n1
00:24:27.423 06:28:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:27.679
00:24:27.679 06:28:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1136717
00:24:27.679 06:28:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:27.680 06:28:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
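bdevperf is now driving verify I/O against NVMe0, which has two paths (4420 active, 4421 standby), both attached with -x failover. The script then exercises failover by walking the listener across ports: drop 4420 so I/O fails over to 4421, introduce 4422, drop 4421, re-add 4420, and finally drop 4422. Condensed from the traced rpc.py calls that follow:

    # The failover walk performed below (paths shortened for readability):
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # I/O fails over to 4421
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
    rpc.py nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420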
00:24:28.611 06:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:28.869 06:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:32.149 06:28:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:32.422
00:24:32.422 06:28:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:32.681 06:28:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:35.978 06:28:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:35.978 [2024-12-08 06:28:25.996099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:35.978 06:28:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:36.911 06:28:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:37.476 [2024-12-08 06:28:27.325810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15391e0 is same with the state(6) to be set
00:24:37.476 [the same tcp.c:1790 *ERROR* message repeated 7 more times for tqpair=0x15391e0, 06:28:27.325871 through 06:28:27.325944]
00:24:37.476 06:28:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1136717
00:24:42.767 {
00:24:42.767 "results": [
00:24:42.767 {
00:24:42.767 "job": "NVMe0n1",
00:24:42.767 "core_mask": "0x1",
00:24:42.767 "workload": "verify",
00:24:42.767 "status": "finished",
00:24:42.767 "verify_range": {
00:24:42.767 "start": 0,
00:24:42.767 "length": 16384
00:24:42.767 },
00:24:42.767 "queue_depth": 128,
00:24:42.767 "io_size": 4096,
00:24:42.767 "runtime": 15.00792,
00:24:42.767 "iops": 8511.705819327395,
00:24:42.767 "mibps": 33.24885085674764,
00:24:42.767 "io_failed": 12325,
00:24:42.767 "io_timeout": 0,
00:24:42.767 "avg_latency_us": 13688.873771913959,
00:24:42.767 "min_latency_us": 552.2014814814814,
00:24:42.767 "max_latency_us": 17184.995555555557
00:24:42.767 }
00:24:42.767 ],
00:24:42.767 "core_count": 1
00:24:42.767 }
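Reading the summary: io_failed counts I/O that errored while the active path was being torn down (they are reissued on the surviving path, which is why the job still ends with "status": "finished"), and mibps follows from iops at the 4096-byte io_size: 8511.71 x 4096 / 2^20 = 33.25 MiB/s. A one-liner to recompute that from the JSON, assuming jq is available and the summary was saved to a file (results.json here is hypothetical; in this run the JSON goes to stdout):

    # Recompute throughput and failed-I/O count from the bdevperf summary (sketch only)
    jq -r '.results[] | "\(.job): \(.iops * .io_size / 1048576) MiB/s, \(.io_failed) failed I/O"' results.json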
00:24:42.767 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1136583
00:24:42.767 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1136583 ']'
00:24:42.767 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1136583
00:24:42.767 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:42.767 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:42.767 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1136583
00:24:42.767 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:42.767 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:42.767 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1136583'
00:24:42.767 killing process with pid 1136583
00:24:42.767 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1136583
00:24:42.767 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1136583
00:24:43.028 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:43.028 [2024-12-08 06:28:16.668281] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:24:43.028 [2024-12-08 06:28:16.668361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136583 ]
00:24:43.028 [2024-12-08 06:28:16.736768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:43.028 [2024-12-08 06:28:16.797953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:43.028 Running I/O for 15 seconds... 8657.00 IOPS, 33.82 MiB/s [2024-12-08T05:28:33.147Z]
00:24:43.028 [2024-12-08 06:28:18.951905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:43.028 [2024-12-08 06:28:18.951984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[~125 further nvme_qpair.c print_command/print_completion pairs elided: the remaining in-flight READ and WRITE commands on qid:1 (lba 81616-82632, len:8) each completed with ABORTED - SQ DELETION (00/08) while the qpair was torn down during listener failover; the excerpt breaks off mid-record.]
00:24:43.030 [2024-12-08 06:28:18.955820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.030 [2024-12-08 06:28:18.955834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211dfd0 is same with the state(6) to be set
00:24:43.030 [2024-12-08 06:28:18.955852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:43.030 [2024-12-08 06:28:18.955864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:43.030 [2024-12-08 06:28:18.955875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82368 len:8 PRP1 0x0 PRP2 0x0
00:24:43.030 [2024-12-08 06:28:18.955888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.030 [2024-12-08 06:28:18.955952] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:43.030 [2024-12-08 06:28:18.955997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:43.030 [2024-12-08 06:28:18.956015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.030 [2024-12-08 06:28:18.956030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:43.030 [2024-12-08 06:28:18.956044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.030 [2024-12-08 06:28:18.956057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:43.030 [2024-12-08 06:28:18.956076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.030 [2024-12-08 06:28:18.956090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:43.030 [2024-12-08 06:28:18.956103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.030 [2024-12-08 06:28:18.956116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:43.030 [2024-12-08 06:28:18.956179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f9820 (9): Bad file descriptor
00:24:43.030 [2024-12-08 06:28:18.959513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:43.030 [2024-12-08 06:28:19.034137] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
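(The sequence above is the expected SPDK bdev_nvme multipath failover path: every command still queued on the torn-down TCP qpair is completed with ABORTED - SQ DELETION, bdev_nvme fails over from 10.0.0.2:4420 to the alternate trid 10.0.0.2:4421, and the controller reset completes so I/O can resume. A minimal sketch of how such a two-path host is typically registered, assuming the standard scripts/rpc.py CLI; the bdev name Nvme0 is illustrative, while the addresses, ports, and NQN mirror the log:

+ scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
+ scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover

With -x failover the second call registers 10.0.0.2:4421 as an alternate path for the same subsystem rather than creating a new controller, which is what makes the in-log failover from :4420 to :4421, and later to :4422, possible.)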
00:24:43.030 8279.50 IOPS, 32.34 MiB/s [2024-12-08T05:28:33.149Z] 8407.67 IOPS, 32.84 MiB/s [2024-12-08T05:28:33.149Z] 8489.00 IOPS, 33.16 MiB/s [2024-12-08T05:28:33.149Z]
[2024-12-08 06:28:22.689545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:43.030 [2024-12-08 06:28:22.689626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... over a hundred further qid:1 commands (WRITEs lba:98312-98824 interleaved with READs lba:97808-98288), each printed and completed with ABORTED - SQ DELETION in the same pattern ...]
00:24:43.032 [2024-12-08 06:28:22.693501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:43.032 [2024-12-08 06:28:22.693523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:43.032 [2024-12-08 06:28:22.693536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98296 len:8 PRP1 0x0 PRP2 0x0
00:24:43.032 [2024-12-08 06:28:22.693549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.032 [2024-12-08 06:28:22.693619] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... four ASYNC EVENT REQUEST admin commands (qid:0, cid:3 down to cid:0) each completed with ABORTED - SQ DELETION ...]
00:24:43.032 [2024-12-08 06:28:22.693795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:24:43.032 [2024-12-08 06:28:22.693838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f9820 (9): Bad file descriptor
00:24:43.032 [2024-12-08 06:28:22.697164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:24:43.032 [2024-12-08 06:28:22.766280] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:24:43.032 8389.20 IOPS, 32.77 MiB/s [2024-12-08T05:28:33.151Z] 8438.50 IOPS, 32.96 MiB/s [2024-12-08T05:28:33.151Z] 8488.29 IOPS, 33.16 MiB/s [2024-12-08T05:28:33.151Z] 8510.00 IOPS, 33.24 MiB/s [2024-12-08T05:28:33.151Z] 8526.78 IOPS, 33.31 MiB/s [2024-12-08T05:28:33.151Z]
[2024-12-08 06:28:27.327507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:43.032 [2024-12-08 06:28:27.327566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further qid:1 WRITEs (lba:44528-44632) printed and completed with ABORTED - SQ DELETION in the same pattern ...]
00:24:43.033 [2024-12-08 06:28:27.328049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:43.033 [2024-12-08 06:28:27.328062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 
06:28:27.328673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.328982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.328997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.033 [2024-12-08 06:28:27.329127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.033 [2024-12-08 06:28:27.329156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.033 [2024-12-08 06:28:27.329185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.033 [2024-12-08 06:28:27.329214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.033 [2024-12-08 06:28:27.329243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.033 [2024-12-08 06:28:27.329275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.033 [2024-12-08 06:28:27.329306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44992 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.033 [2024-12-08 06:28:27.329741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.033 [2024-12-08 06:28:27.329756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.329772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.034 [2024-12-08 06:28:27.329786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.329801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.034 [2024-12-08 06:28:27.329815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.329830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.034 [2024-12-08 06:28:27.329844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.329860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.034 [2024-12-08 06:28:27.329873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.329888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.034 [2024-12-08 
06:28:27.329902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.329917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.034 [2024-12-08 06:28:27.329931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.329947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.034 [2024-12-08 06:28:27.329960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.329975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.034 [2024-12-08 06:28:27.329989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.034 [2024-12-08 06:28:27.330023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.034 [2024-12-08 06:28:27.330053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.034 [2024-12-08 06:28:27.330082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.034 [2024-12-08 06:28:27.330111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.034 [2024-12-08 06:28:27.330140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45144 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 
[2024-12-08 06:28:27.330233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45152 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45160 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45168 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44344 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44352 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44360 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330526] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44368 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44376 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44384 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44392 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44400 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45176 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:24:43.034 [2024-12-08 06:28:27.330853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45184 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45192 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.330952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.330964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45200 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.330977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.330990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.331001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.331012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45208 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.331024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.331037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.331048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.331059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45216 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.331072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.331085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.331096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.331108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45224 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.331120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.331134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.331144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 
06:28:27.331155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45232 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.331168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.331182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.331193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.331203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45240 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.331219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.331234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.331245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.331255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45248 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.331268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.331281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.331291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.331303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45256 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.331316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.331329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.331339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.331350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45264 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.331362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.331375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.331385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.331397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45272 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.331409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.331422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.331433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.331444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45280 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.331457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.331470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.331481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.331492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45288 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.331504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.034 [2024-12-08 06:28:27.331517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.034 [2024-12-08 06:28:27.331527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.034 [2024-12-08 06:28:27.331538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45296 len:8 PRP1 0x0 PRP2 0x0 00:24:43.034 [2024-12-08 06:28:27.331551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.331564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.331574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.331589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44408 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.331602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.331615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.331626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.331637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44416 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.331649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.331662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.331673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.331684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44424 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.331697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.331710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.331726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.331740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:44432 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.331753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.331766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.331777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.331788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44440 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.331801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.331814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.331824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.331835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44448 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.331854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.331867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.331878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.331889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44456 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.331902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.331914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.331925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.331936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44464 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.331949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.331965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.331976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.331987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44472 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.332000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.332013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.332023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.332040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44480 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 
[2024-12-08 06:28:27.332053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.332066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.332077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.332088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44488 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.332101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.332114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.332125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.332135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44496 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.332148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.332161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.332172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.332183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44504 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.332195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.332208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.332219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.332230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44512 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.332243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.332256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.035 [2024-12-08 06:28:27.332267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.035 [2024-12-08 06:28:27.332278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44520 len:8 PRP1 0x0 PRP2 0x0 00:24:43.035 [2024-12-08 06:28:27.332290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.332361] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:43.035 [2024-12-08 06:28:27.332403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.035 [2024-12-08 06:28:27.332422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.332440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.035 [2024-12-08 06:28:27.332454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.332468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.035 [2024-12-08 06:28:27.332482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.332496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.035 [2024-12-08 06:28:27.332509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.035 [2024-12-08 06:28:27.332522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:43.035 [2024-12-08 06:28:27.332594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f9820 (9): Bad file descriptor 00:24:43.035 [2024-12-08 06:28:27.335883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:43.035 [2024-12-08 06:28:27.487441] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:24:43.035 8401.80 IOPS, 32.82 MiB/s [2024-12-08T05:28:33.154Z] 8429.27 IOPS, 32.93 MiB/s [2024-12-08T05:28:33.154Z] 8459.58 IOPS, 33.05 MiB/s [2024-12-08T05:28:33.154Z] 8465.77 IOPS, 33.07 MiB/s [2024-12-08T05:28:33.154Z] 8487.29 IOPS, 33.15 MiB/s [2024-12-08T05:28:33.154Z] 8507.67 IOPS, 33.23 MiB/s 00:24:43.035 Latency(us) 00:24:43.035 [2024-12-08T05:28:33.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.035 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:43.035 Verification LBA range: start 0x0 length 0x4000 00:24:43.035 NVMe0n1 : 15.01 8511.71 33.25 821.23 0.00 13688.87 552.20 17185.00 00:24:43.035 [2024-12-08T05:28:33.154Z] =================================================================================================================== 00:24:43.035 [2024-12-08T05:28:33.154Z] Total : 8511.71 33.25 821.23 0.00 13688.87 552.20 17185.00 00:24:43.035 Received shutdown signal, test time was about 15.000000 seconds 00:24:43.035 00:24:43.035 Latency(us) 00:24:43.035 [2024-12-08T05:28:33.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.035 [2024-12-08T05:28:33.154Z] =================================================================================================================== 00:24:43.035 [2024-12-08T05:28:33.154Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:43.035 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:43.035 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:43.035 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:43.035 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
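The @65/@67 lines are the test's core assertion: the run must have produced exactly three "Resetting controller successful" events, one per forced failover. A minimal standalone sketch of that check, assuming the bdevperf log was captured to the try.txt file the script cats further below:

    # count successful controller resets in the captured log (file name assumed)
    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi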
00:24:43.035 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:43.035 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1138556
00:24:43.035 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1138556 /var/tmp/bdevperf.sock
00:24:43.035 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1138556 ']'
00:24:43.035 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:43.035 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:43.035 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:43.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:43.035 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:43.035 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:43.293 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:43.293 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:24:43.293 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:43.550 [2024-12-08 06:28:33.608300] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:43.550 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:43.807 [2024-12-08 06:28:33.873020] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:43.807 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:44.374 NVMe0n1
00:24:44.374 06:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:44.632
00:24:44.632 06:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:45.197
00:24:45.197 06:28:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:45.197 06:28:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:45.453 06:28:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
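The trace above builds a single multipath NVMe bdev for bdevperf: two additional listeners are added on ports 4421 and 4422, and bdev_nvme_attach_controller is invoked once per port with -x failover, so the second and third transport IDs are registered as alternate failover paths of NVMe0 rather than as separate controllers. A condensed sketch of the same setup, assuming an SPDK checkout at ./spdk and the addresses from the trace:

    # extra listeners on the target (default target RPC socket assumed)
    ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # register all three paths with the bdevperf instance on /var/tmp/bdevperf.sock
    for port in 4420 4421 4422; do
        ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -x failover
    done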
nqn.2016-06.io.spdk:cnode1
00:24:45.711 06:28:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:48.990 06:28:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:48.990 06:28:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:24:48.990 06:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1139231
00:24:48.990 06:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:48.990 06:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1139231
00:24:50.365 {
00:24:50.365 "results": [
00:24:50.365 {
00:24:50.365 "job": "NVMe0n1",
00:24:50.365 "core_mask": "0x1",
00:24:50.365 "workload": "verify",
00:24:50.365 "status": "finished",
00:24:50.365 "verify_range": {
00:24:50.365 "start": 0,
00:24:50.365 "length": 16384
00:24:50.365 },
00:24:50.365 "queue_depth": 128,
00:24:50.365 "io_size": 4096,
00:24:50.365 "runtime": 1.015032,
00:24:50.365 "iops": 8724.848083607216,
00:24:50.365 "mibps": 34.08143782659069,
00:24:50.365 "io_failed": 0,
00:24:50.365 "io_timeout": 0,
00:24:50.365 "avg_latency_us": 14604.762913446419,
00:24:50.365 "min_latency_us": 3009.8014814814815,
00:24:50.365 "max_latency_us": 13107.2
00:24:50.365 }
00:24:50.365 ],
00:24:50.365 "core_count": 1
00:24:50.365 }
00:24:50.365 06:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:50.365 [2024-12-08 06:28:33.124480] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:24:50.365 [2024-12-08 06:28:33.124565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138556 ]
00:24:50.365 [2024-12-08 06:28:33.194542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:50.365 [2024-12-08 06:28:33.253086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:50.365 [2024-12-08 06:28:35.735488] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:50.365 [2024-12-08 06:28:35.735586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:50.365 [2024-12-08 06:28:35.735611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:50.365 [2024-12-08 06:28:35.735628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:50.365 [2024-12-08 06:28:35.735642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:50.365 [2024-12-08 06:28:35.735657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:50.365 [2024-12-08 06:28:35.735670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:50.365 [2024-12-08 06:28:35.735683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:50.365 [2024-12-08 06:28:35.735707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:50.365 [2024-12-08 06:28:35.735730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:24:50.365 [2024-12-08 06:28:35.735787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:24:50.365 [2024-12-08 06:28:35.735820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b08820 (9): Bad file descriptor
00:24:50.365 [2024-12-08 06:28:35.742232] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:24:50.365 Running I/O for 1 seconds...
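
Condensed, the failover scenario the try.txt dump above records comes down to a handful of RPCs: publish the subsystem on extra ports, attach the same bdev name over every trid with -x failover so the extra trids become standby paths, drop the active path, and count the resets. A minimal sketch, with the workspace-qualified script paths abbreviated to rpc.py and all ports, flags and NQNs taken from the trace:

    # target side: expose nqn.2016-06.io.spdk:cnode1 on two additional ports
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # bdevperf side: same bdev name over all three trids; -x failover keeps one
    # path active and holds the others in reserve
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # drop the active path: outstanding admin commands complete as ABORTED - SQ DELETION
    # and bdev_nvme resets onto the next registered trid (the events dumped above)
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # pass criterion (failover.sh@65): one 'Resetting controller successful' per failover
    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 ))
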
00:24:50.365 8720.00 IOPS, 34.06 MiB/s
00:24:50.365 Latency(us)
00:24:50.365 [2024-12-08T05:28:40.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:50.365 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:50.365 Verification LBA range: start 0x0 length 0x4000
00:24:50.365 NVMe0n1 : 1.02 8724.85 34.08 0.00 0.00 14604.76 3009.80 13107.20
00:24:50.365 [2024-12-08T05:28:40.484Z] ===================================================================================================================
00:24:50.365 [2024-12-08T05:28:40.484Z] Total : 8724.85 34.08 0.00 0.00 14604.76 3009.80 13107.20
00:24:50.365 06:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:50.365 06:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:50.624 06:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:50.624 06:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:50.624 06:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:50.881 06:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:51.445 06:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:54.728 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:54.728 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:54.728 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1138556
00:24:54.728 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1138556 ']'
00:24:54.728 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1138556
00:24:54.728 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:54.728 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:54.728 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1138556
00:24:54.728 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:54.728 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:54.728 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1138556'
killing process with pid 1138556
00:24:54.728 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1138556
00:24:54.729 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1138556
00:24:54.729 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@110 -- # sync 00:24:54.729 06:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:55.293 rmmod nvme_tcp 00:24:55.293 rmmod nvme_fabrics 00:24:55.293 rmmod nvme_keyring 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1136280 ']' 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1136280 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1136280 ']' 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1136280 00:24:55.293 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:55.294 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.294 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1136280 00:24:55.294 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:55.294 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:55.294 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1136280' 00:24:55.294 killing process with pid 1136280 00:24:55.294 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1136280 00:24:55.294 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1136280 00:24:55.551 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:55.551 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:55.551 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:55.551 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:55.551 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:55.551 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:55.551 06:28:45 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:55.551 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:55.551 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:55.551 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.551 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.551 06:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.453 06:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.453 00:24:57.453 real 0m35.648s 00:24:57.453 user 2m5.731s 00:24:57.454 sys 0m5.999s 00:24:57.454 06:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.454 06:28:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.454 ************************************ 00:24:57.454 END TEST nvmf_failover 00:24:57.454 ************************************ 00:24:57.454 06:28:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:57.454 06:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:57.454 06:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.454 06:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.712 ************************************ 00:24:57.712 START TEST nvmf_host_discovery 00:24:57.712 ************************************ 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:57.712 * Looking for test storage... 
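
A few lines below, the discovery test's xtrace walks scripts/common.sh's version gate (lt 1.15 2, deciding whether the installed lcov predates 2.x). Stripped of the tracing, that comparison amounts to the following sketch; the variable names mirror the trace, but the real helper also normalizes each field through decimal(), which is omitted here:

    # split versions on '.', '-' or ':' and compare field by field
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            # missing fields count as 0, so "1.15" vs "2" is settled at the first field
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == *'='* ]]   # all fields equal: true only for operators admitting equality
    }
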
00:24:57.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:57.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.712 --rc genhtml_branch_coverage=1 00:24:57.712 --rc genhtml_function_coverage=1 00:24:57.712 --rc genhtml_legend=1 00:24:57.712 --rc geninfo_all_blocks=1 00:24:57.712 --rc geninfo_unexecuted_blocks=1 00:24:57.712 00:24:57.712 ' 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:57.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.712 --rc genhtml_branch_coverage=1 00:24:57.712 --rc genhtml_function_coverage=1 00:24:57.712 --rc genhtml_legend=1 00:24:57.712 --rc geninfo_all_blocks=1 00:24:57.712 --rc geninfo_unexecuted_blocks=1 00:24:57.712 00:24:57.712 ' 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:57.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.712 --rc genhtml_branch_coverage=1 00:24:57.712 --rc genhtml_function_coverage=1 00:24:57.712 --rc genhtml_legend=1 00:24:57.712 --rc geninfo_all_blocks=1 00:24:57.712 --rc geninfo_unexecuted_blocks=1 00:24:57.712 00:24:57.712 ' 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:57.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.712 --rc genhtml_branch_coverage=1 00:24:57.712 --rc genhtml_function_coverage=1 00:24:57.712 --rc genhtml_legend=1 00:24:57.712 --rc geninfo_all_blocks=1 00:24:57.712 --rc geninfo_unexecuted_blocks=1 00:24:57.712 00:24:57.712 ' 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.712 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:57.713 06:28:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.713 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:00.243 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:00.243 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.243 06:28:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.243 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:00.244 Found net devices under 0000:84:00.0: cvl_0_0 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:00.244 Found net devices under 0000:84:00.1: cvl_0_1 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.244 
06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:00.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:25:00.244 00:25:00.244 --- 10.0.0.2 ping statistics --- 00:25:00.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.244 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:00.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:25:00.244 00:25:00.244 --- 10.0.0.1 ping statistics --- 00:25:00.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.244 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1141924 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1141924 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1141924 ']' 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.244 06:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.244 [2024-12-08 06:28:49.961442] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
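
Before this target came up, nvmftestinit rebuilt the standard two-port TCP topology: one E810 port stays in the root namespace as the initiator (cvl_0_1), the other is moved into a private namespace as the target (cvl_0_0), and a ping in each direction proves the 10.0.0.0/24 link. The plumbing traced above reduces to roughly this sequence (interface, namespace and address names exactly as logged):

    ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
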
00:25:00.244 [2024-12-08 06:28:49.961526] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.244 [2024-12-08 06:28:50.043989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.244 [2024-12-08 06:28:50.105838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.244 [2024-12-08 06:28:50.105903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.244 [2024-12-08 06:28:50.105933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.244 [2024-12-08 06:28:50.105945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.244 [2024-12-08 06:28:50.105956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.244 [2024-12-08 06:28:50.106640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.244 [2024-12-08 06:28:50.264664] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.244 [2024-12-08 06:28:50.272921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:00.244 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.245 null0 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.245 null1 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1141999 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1141999 /tmp/host.sock 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1141999 ']' 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:00.245 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.245 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.245 [2024-12-08 06:28:50.347913] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
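
From here the discovery test drives two independent SPDK processes: the nvmf_tgt started earlier inside the namespace (core mask 0x2, RPC on the default /var/tmp/spdk.sock) plays the target, while this second nvmf_tgt (core mask 0x1, RPC on /tmp/host.sock) plays the NVMe host. A sketch of the wiring, using rpc.py directly where the script goes through its rpc_cmd wrapper; sockets, NQNs and ports as in the trace, including the bdev_nvme_start_discovery call issued a few lines below:

    # target: TCP transport plus a discovery-subsystem listener on port 8009
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    # host side: point its bdev_nvme layer at the discovery service, so every
    # subsystem the target advertises gets attached automatically under "nvme"
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
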
00:25:00.245 [2024-12-08 06:28:50.348000] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141999 ] 00:25:00.503 [2024-12-08 06:28:50.416938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.503 [2024-12-08 06:28:50.474359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.503 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.503 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:00.761 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:00.761 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:00.762 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.021 [2024-12-08 06:28:50.918615] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:01.021 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.021 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:25:01.021 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:25:01.021 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:01.021 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:25:01.022 06:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:25:01.586 [2024-12-08 06:28:51.681928] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:01.586 [2024-12-08 06:28:51.681954] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
[2024-12-08 06:28:51.681977] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:01.843 [2024-12-08 06:28:51.768250] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
[2024-12-08 06:28:51.944358] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
[2024-12-08 06:28:51.945417] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1bcd0d0:1 started.
[2024-12-08 06:28:51.947136] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
[2024-12-08 06:28:51.947157] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
[2024-12-08 06:28:51.951050] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1bcd0d0 was disconnected and freed. delete nvme_qpair.
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.102 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:02.361 [2024-12-08 06:28:52.257151] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1bcd300:1 started.
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:25:02.361 [2024-12-08 06:28:52.261267] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1bcd300 was disconnected and freed. delete nvme_qpair.
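For readers following the xtrace: the recurring @918-@924 lines are single passes through the waitforcondition polling helper. A minimal sketch of that pattern, reconstructed from the trace above (the real helper lives in common/autotest_common.sh and may differ in detail):

    waitforcondition() {
        local cond=$1    # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10
        while ((max--)); do
            # re-evaluate the condition string on each pass; success ends the wait
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1    # condition never came true within ~10 seconds
    }

The suite runs under set -x, which is why every line of each pass is echoed into the log.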
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:02.361 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:02.362 [2024-12-08 06:28:52.350878] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
[2024-12-08 06:28:52.351259] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-12-08 06:28:52.351287] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.362 [2024-12-08 06:28:52.437523] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:02.362 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.620 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:25:02.620 06:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:25:02.620 [2024-12-08 06:28:52.496257] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
[2024-12-08 06:28:52.496301] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
[2024-12-08 06:28:52.496315] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
[2024-12-08 06:28:52.496323] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:03.556 [2024-12-08 06:28:53.583433] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-12-08 06:28:53.583475] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:25:03.556 [2024-12-08 06:28:53.588573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-08 06:28:53.588621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-08 06:28:53.588639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-08 06:28:53.588652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-08 06:28:53.588665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-08 06:28:53.588678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-08 06:28:53.588691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-08 06:28:53.588704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-08 06:28:53.588717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d710 is same with the state(6) to be set
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.556 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:03.557 [2024-12-08 06:28:53.598579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d710 (9): Bad file descriptor
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:03.557 [2024-12-08 06:28:53.608618] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-12-08 06:28:53.608639] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-12-08 06:28:53.608652] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-12-08 06:28:53.608662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-12-08 06:28:53.608710] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-12-08 06:28:53.608918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-12-08 06:28:53.608947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9d710 with addr=10.0.0.2, port=4420
[2024-12-08 06:28:53.608964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d710 is same with the state(6) to be set
[2024-12-08 06:28:53.608987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d710 (9): Bad file descriptor
[2024-12-08 06:28:53.609009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-12-08 06:28:53.609023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-12-08 06:28:53.609041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
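The @55/@59/@74 pipelines repeated throughout the trace reduce to three small helpers. A sketch assembled from the commands shown above (rpc_cmd talks to the host-side SPDK app via the /tmp/host.sock RPC socket; the notify_id bookkeeping is inferred from the notification_count=/notify_id= assignments in the trace, so treat it as an assumption):

    get_subsystem_names() {
        # sorted, space-separated controller names; '' when none exist
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_notification_count() {
        # count events newer than the last seen id, then advance the id
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }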
00:25:03.557 [2024-12-08 06:28:53.609068] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-12-08 06:28:53.609086] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-12-08 06:28:53.609094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-12-08 06:28:53.618742] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-12-08 06:28:53.618762] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-12-08 06:28:53.618770] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-12-08 06:28:53.618777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-12-08 06:28:53.618815] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-12-08 06:28:53.618973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-12-08 06:28:53.618999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9d710 with addr=10.0.0.2, port=4420
[2024-12-08 06:28:53.619028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d710 is same with the state(6) to be set
[2024-12-08 06:28:53.619048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d710 (9): Bad file descriptor
[2024-12-08 06:28:53.619067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-12-08 06:28:53.619079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-12-08 06:28:53.619091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-12-08 06:28:53.619102] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-12-08 06:28:53.619110] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-12-08 06:28:53.619117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-12-08 06:28:53.628850] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-12-08 06:28:53.628872] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-12-08 06:28:53.628881] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-12-08 06:28:53.628888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-12-08 06:28:53.628927] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:03.557 [2024-12-08 06:28:53.629108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-12-08 06:28:53.629134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9d710 with addr=10.0.0.2, port=4420
[2024-12-08 06:28:53.629149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d710 is same with the state(6) to be set
[2024-12-08 06:28:53.629169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d710 (9): Bad file descriptor
[2024-12-08 06:28:53.629200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-12-08 06:28:53.629216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-12-08 06:28:53.629228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-12-08 06:28:53.629241] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-12-08 06:28:53.629253] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-12-08 06:28:53.629261] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:03.557 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
[2024-12-08 06:28:53.638976] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-12-08 06:28:53.639024] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
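The posix.c connect() failures repeating here are errno 111 (ECONNREFUSED): host/discovery.sh@127 removed the 4420 listener while bdev_nvme still held a controller path to that port, so each reconnect attempt is refused until the next discovery log page prunes the stale path. The two RPCs that set this scenario up, as issued earlier in the trace:

    # keep the subsystem reachable on a second port, then drop the first
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420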
00:25:03.557 [2024-12-08 06:28:53.639033] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-12-08 06:28:53.639040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-12-08 06:28:53.639064] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-12-08 06:28:53.639235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-12-08 06:28:53.639261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9d710 with addr=10.0.0.2, port=4420
[2024-12-08 06:28:53.639276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d710 is same with the state(6) to be set
[2024-12-08 06:28:53.639296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d710 (9): Bad file descriptor
[2024-12-08 06:28:53.639326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-12-08 06:28:53.639342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-12-08 06:28:53.639355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-12-08 06:28:53.639366] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-12-08 06:28:53.639374] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-12-08 06:28:53.639381] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-12-08 06:28:53.649098] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-12-08 06:28:53.649127] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-12-08 06:28:53.649136] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-12-08 06:28:53.649144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-12-08 06:28:53.649184] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-12-08 06:28:53.649342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.558 [2024-12-08 06:28:53.649368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9d710 with addr=10.0.0.2, port=4420
[2024-12-08 06:28:53.649383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d710 is same with the state(6) to be set
[2024-12-08 06:28:53.649403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d710 (9): Bad file descriptor
[2024-12-08 06:28:53.649434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-12-08 06:28:53.649451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-12-08 06:28:53.649464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-12-08 06:28:53.649475] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-12-08 06:28:53.649484] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-12-08 06:28:53.649491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-12-08 06:28:53.659217] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-12-08 06:28:53.659237] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-12-08 06:28:53.659246] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-12-08 06:28:53.659253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-12-08 06:28:53.659291] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-12-08 06:28:53.659453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-12-08 06:28:53.659480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9d710 with addr=10.0.0.2, port=4420
[2024-12-08 06:28:53.659495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d710 is same with the state(6) to be set
[2024-12-08 06:28:53.659515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d710 (9): Bad file descriptor
[2024-12-08 06:28:53.659547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-12-08 06:28:53.659564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-12-08 06:28:53.659577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-12-08 06:28:53.659588] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:03.558 [2024-12-08 06:28:53.659596] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-12-08 06:28:53.659603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:03.558 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-12-08 06:28:53.669325] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-12-08 06:28:53.669346] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-12-08 06:28:53.669354] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-12-08 06:28:53.669361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-12-08 06:28:53.669400] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-12-08 06:28:53.669537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-12-08 06:28:53.669574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9d710 with addr=10.0.0.2, port=4420
[2024-12-08 06:28:53.669590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d710 is same with the state(6) to be set
[2024-12-08 06:28:53.669610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d710 (9): Bad file descriptor
[2024-12-08 06:28:53.669658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-12-08 06:28:53.669675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-12-08 06:28:53.669688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-12-08 06:28:53.669700] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-12-08 06:28:53.669708] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-12-08 06:28:53.669715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
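The @63 checks that follow confirm which paths survive the listener removal; they boil down to this helper, sketched from the trace (sort -n keeps the port list numeric so it can be compared verbatim against "$NVMF_PORT $NVMF_SECOND_PORT" or "$NVMF_SECOND_PORT"):

    get_subsystem_paths() {
        local name=$1    # controller name, e.g. nvme0
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }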
00:25:03.558 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:03.558 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.558 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:03.558 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:03.558 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.558 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.558 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:03.817 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:03.817 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:03.817 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:03.817 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.817 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:03.817 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.817 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:03.817 [2024-12-08 06:28:53.679432] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:03.817 [2024-12-08 06:28:53.679459] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:03.817 [2024-12-08 06:28:53.679484] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:03.817 [2024-12-08 06:28:53.679491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:03.817 [2024-12-08 06:28:53.679516] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:03.817 [2024-12-08 06:28:53.679695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.817 [2024-12-08 06:28:53.679732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9d710 with addr=10.0.0.2, port=4420 00:25:03.817 [2024-12-08 06:28:53.679750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d710 is same with the state(6) to be set 00:25:03.817 [2024-12-08 06:28:53.679772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d710 (9): Bad file descriptor 00:25:03.817 [2024-12-08 06:28:53.679806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:03.817 [2024-12-08 06:28:53.679824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:03.817 [2024-12-08 06:28:53.679837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:03.817 [2024-12-08 06:28:53.679850] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:03.817 [2024-12-08 06:28:53.679859] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:03.817 [2024-12-08 06:28:53.679866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:03.817 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.817 [2024-12-08 06:28:53.689548] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:03.817 [2024-12-08 06:28:53.689569] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:03.817 [2024-12-08 06:28:53.689577] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:03.817 [2024-12-08 06:28:53.689584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:03.817 [2024-12-08 06:28:53.689623] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:03.817 [2024-12-08 06:28:53.689787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.817 [2024-12-08 06:28:53.689815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9d710 with addr=10.0.0.2, port=4420 00:25:03.817 [2024-12-08 06:28:53.689831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d710 is same with the state(6) to be set 00:25:03.817 [2024-12-08 06:28:53.689852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d710 (9): Bad file descriptor 00:25:03.817 [2024-12-08 06:28:53.689885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:03.817 [2024-12-08 06:28:53.689903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:03.817 [2024-12-08 06:28:53.689916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:03.817 [2024-12-08 06:28:53.689928] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:03.817 [2024-12-08 06:28:53.689937] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:03.817 [2024-12-08 06:28:53.689944] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:03.817 [2024-12-08 06:28:53.699656] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:03.817 [2024-12-08 06:28:53.699675] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:03.817 [2024-12-08 06:28:53.699683] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:03.817 [2024-12-08 06:28:53.699690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:03.817 [2024-12-08 06:28:53.699747] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:03.817 [2024-12-08 06:28:53.699915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.817 [2024-12-08 06:28:53.699942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9d710 with addr=10.0.0.2, port=4420 00:25:03.817 [2024-12-08 06:28:53.699958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d710 is same with the state(6) to be set 00:25:03.817 [2024-12-08 06:28:53.699980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d710 (9): Bad file descriptor 00:25:03.817 [2024-12-08 06:28:53.700028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:03.817 [2024-12-08 06:28:53.700044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:03.817 [2024-12-08 06:28:53.700057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:03.817 [2024-12-08 06:28:53.700083] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:03.817 [2024-12-08 06:28:53.700091] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:03.817 [2024-12-08 06:28:53.700098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:03.817 [2024-12-08 06:28:53.709783] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:03.817 [2024-12-08 06:28:53.709803] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:03.817 [2024-12-08 06:28:53.709811] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:03.817 [2024-12-08 06:28:53.709818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:03.817 [2024-12-08 06:28:53.709841] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
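Each burst above is the same retry roughly 10 ms apart: errno 111 is ECONNREFUSED on Linux, meaning nothing listens on 10.0.0.2:4420 any more, so every reconnect fails at connect() until discovery repoints the controller at 4421. A quick probe with bash's /dev/tcp shows the same condition (assuming it runs somewhere 10.0.0.2 is reachable):

    # Reproduces the errno-111 case from a shell once 4420 stops listening.
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null &&
        echo "4420 is listening" ||
        echo "connect to 4420 failed, as in the errno-111 lines above"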
00:25:03.817 [2024-12-08 06:28:53.709959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.817 [2024-12-08 06:28:53.709984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9d710 with addr=10.0.0.2, port=4420 00:25:03.817 [2024-12-08 06:28:53.709998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9d710 is same with the state(6) to be set 00:25:03.817 [2024-12-08 06:28:53.710018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9d710 (9): Bad file descriptor 00:25:03.817 [2024-12-08 06:28:53.710048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:03.817 [2024-12-08 06:28:53.710065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:03.817 [2024-12-08 06:28:53.710077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:03.817 [2024-12-08 06:28:53.710088] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:03.817 [2024-12-08 06:28:53.710096] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:03.817 [2024-12-08 06:28:53.710107] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:03.817 [2024-12-08 06:28:53.710232] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:03.817 [2024-12-08 06:28:53.710258] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:03.817 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:03.817 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:04.752 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:04.753 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.011 06:28:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.942 [2024-12-08 06:28:55.932847] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:05.942 [2024-12-08 06:28:55.932872] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:05.942 [2024-12-08 06:28:55.932895] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:05.942 [2024-12-08 06:28:56.019162] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:06.201 [2024-12-08 06:28:56.077895] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:06.201 [2024-12-08 06:28:56.078694] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1bb25e0:1 started. 
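The notification check above (host/discovery.sh@74-75) asks the target only for events newer than the last seen id and counts them with jq; notify_id then advances past the counted events (2, then 4 in this run), so each wait only observes new events. A sketch consistent with that trace; the exact notify_id bookkeeping in the real helper is an assumption:

    # Reconstructed from the host/discovery.sh@74-80 xtrace. notification_count
    # and notify_id are globals, matching how the trace assigns them.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        # Assumption: ids are dense, so the high-water mark advances by the count.
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }

Note also that the bdev_nvme_start_discovery call at discovery.sh@141 just above passes -w (wait_for_attach: true in the later request dumps), which is why the discovery attach log lines around this point appear while the RPC is still pending.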
00:25:06.201 [2024-12-08 06:28:56.080783] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:06.201 [2024-12-08 06:28:56.080814] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:06.201 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.201 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:06.201 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:06.201 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:06.201 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:06.201 [2024-12-08 06:28:56.082429] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1bb25e0 was disconnected and freed. delete nvme_qpair. 00:25:06.201 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:06.201 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:06.201 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:06.201 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:06.201 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.201 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.201 request: 00:25:06.201 { 00:25:06.201 "name": "nvme", 00:25:06.201 "trtype": "tcp", 00:25:06.201 "traddr": "10.0.0.2", 00:25:06.201 "adrfam": "ipv4", 00:25:06.201 "trsvcid": "8009", 00:25:06.201 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:06.201 "wait_for_attach": true, 00:25:06.201 "method": "bdev_nvme_start_discovery", 00:25:06.201 "req_id": 1 00:25:06.201 } 00:25:06.201 Got JSON-RPC error response 00:25:06.201 response: 00:25:06.201 { 00:25:06.201 "code": -17, 00:25:06.201 "message": "File exists" 00:25:06.201 } 00:25:06.201 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:06.201 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.202 request: 00:25:06.202 { 00:25:06.202 "name": "nvme_second", 00:25:06.202 "trtype": "tcp", 00:25:06.202 "traddr": "10.0.0.2", 00:25:06.202 "adrfam": "ipv4", 00:25:06.202 "trsvcid": "8009", 00:25:06.202 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:06.202 "wait_for_attach": true, 00:25:06.202 "method": 
"bdev_nvme_start_discovery", 00:25:06.202 "req_id": 1 00:25:06.202 } 00:25:06.202 Got JSON-RPC error response 00:25:06.202 response: 00:25:06.202 { 00:25:06.202 "code": -17, 00:25:06.202 "message": "File exists" 00:25:06.202 } 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:06.202 06:28:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.202 06:28:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.576 [2024-12-08 06:28:57.296329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.576 [2024-12-08 06:28:57.296415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7520 with addr=10.0.0.2, port=8010 00:25:07.576 [2024-12-08 06:28:57.296448] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:07.576 [2024-12-08 06:28:57.296463] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:07.576 [2024-12-08 06:28:57.296476] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:08.508 [2024-12-08 06:28:58.298640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.508 [2024-12-08 06:28:58.298693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb7520 with addr=10.0.0.2, port=8010 00:25:08.508 [2024-12-08 06:28:58.298716] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:08.508 [2024-12-08 06:28:58.298739] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:08.509 [2024-12-08 06:28:58.298779] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:09.527 [2024-12-08 06:28:59.300832] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:09.527 request: 00:25:09.527 { 00:25:09.527 "name": "nvme_second", 00:25:09.527 "trtype": "tcp", 00:25:09.527 "traddr": "10.0.0.2", 00:25:09.527 "adrfam": "ipv4", 00:25:09.527 "trsvcid": "8010", 00:25:09.527 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:09.527 "wait_for_attach": false, 00:25:09.527 "attach_timeout_ms": 3000, 00:25:09.527 "method": "bdev_nvme_start_discovery", 00:25:09.527 "req_id": 1 00:25:09.527 } 00:25:09.527 Got JSON-RPC error response 00:25:09.527 response: 00:25:09.527 { 00:25:09.527 "code": -110, 00:25:09.527 "message": "Connection timed out" 00:25:09.527 } 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:09.527 06:28:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1141999 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:09.527 rmmod nvme_tcp 00:25:09.527 rmmod nvme_fabrics 00:25:09.527 rmmod nvme_keyring 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1141924 ']' 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1141924 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1141924 ']' 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1141924 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1141924 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1141924' 00:25:09.527 killing process with pid 1141924 00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1141924 
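All three failing bdev_nvme_start_discovery attempts above are wrapped in NOT, so an error is the passing outcome: the duplicate starts on port 8009 fail with -17 ("File exists") and the port-8010 attempt fails with -110, the JSON-RPC mapping of ETIMEDOUT once the 3000 ms attach timeout from -T expires. A minimal sketch consistent with the common/autotest_common.sh@652-679 trace; the real helper also validates its argument with type -t and gives exit codes above 128 and an expected error string special handling, both elided here:

    # Succeeds only when the wrapped command fails; simplified from the
    # common/autotest_common.sh@652-679 xtrace lines above.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # exit 0 (pass) iff the command exited nonzero
    }

    # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery ...   # passes via -17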
00:25:09.527 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1141924 00:25:09.786 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.786 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.786 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.786 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:09.786 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:09.786 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.786 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.786 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.786 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.786 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.786 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.786 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.684 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.684 00:25:11.684 real 0m14.118s 00:25:11.684 user 0m20.893s 00:25:11.684 sys 0m2.878s 00:25:11.684 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.684 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.684 ************************************ 00:25:11.684 END TEST nvmf_host_discovery 00:25:11.684 ************************************ 00:25:11.684 06:29:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:11.684 06:29:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:11.684 06:29:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.684 06:29:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.684 ************************************ 00:25:11.684 START TEST nvmf_host_multipath_status 00:25:11.684 ************************************ 00:25:11.684 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:11.941 * Looking for test storage... 
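The nvmftestfini teardown above has three independent pieces: unload the nvme-tcp, nvme-fabrics and nvme-keyring modules, kill the target process, and restore the firewall minus the SPDK rules. The process kill and the iptables step can be sketched from the trace (common/autotest_common.sh@954-978 and nvmf/common.sh@791); the real killprocess has more platform branches than shown:

    # Simplified from the common/autotest_common.sh@954-978 xtrace: check the
    # pid is alive, refuse to kill sudo, report, kill, and reap.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true   # reaps it when the target is a child of this shell
    }

    # nvmf/common.sh@791: drop only the SPDK_NVMF-tagged rules, keep the rest.
    iptables-save | grep -v SPDK_NVMF | iptables-restore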
00:25:11.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.941 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:11.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.942 --rc genhtml_branch_coverage=1 00:25:11.942 --rc genhtml_function_coverage=1 00:25:11.942 --rc genhtml_legend=1 00:25:11.942 --rc geninfo_all_blocks=1 00:25:11.942 --rc geninfo_unexecuted_blocks=1 00:25:11.942 00:25:11.942 ' 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:11.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.942 --rc genhtml_branch_coverage=1 00:25:11.942 --rc genhtml_function_coverage=1 00:25:11.942 --rc genhtml_legend=1 00:25:11.942 --rc geninfo_all_blocks=1 00:25:11.942 --rc geninfo_unexecuted_blocks=1 00:25:11.942 00:25:11.942 ' 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:11.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.942 --rc genhtml_branch_coverage=1 00:25:11.942 --rc genhtml_function_coverage=1 00:25:11.942 --rc genhtml_legend=1 00:25:11.942 --rc geninfo_all_blocks=1 00:25:11.942 --rc geninfo_unexecuted_blocks=1 00:25:11.942 00:25:11.942 ' 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:11.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.942 --rc genhtml_branch_coverage=1 00:25:11.942 --rc genhtml_function_coverage=1 00:25:11.942 --rc genhtml_legend=1 00:25:11.942 --rc geninfo_all_blocks=1 00:25:11.942 --rc geninfo_unexecuted_blocks=1 00:25:11.942 00:25:11.942 ' 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
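The scripts/common.sh trace above (lines 333-368) is an ordinary dotted-version comparison, used here to decide which coverage options the installed lcov supports: split both versions on '.', '-' and ':', then walk max(ver1_l, ver2_l) components, comparing each pair as decimals. A sketch of the same logic; the real helper tallies lt/gt/eq counters per the trace, which this compresses into early returns, and treating missing components as 0 is an assumption:

    # Sketch of the scripts/common.sh@333-368 comparison traced above:
    # cmp_versions 1.15 '<' 2 succeeds because 1 < 2 in the first component.
    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && { [[ $op == '>' || $op == '>=' ]]; return $?; }
            ((d1 < d2)) && { [[ $op == '<' || $op == '<=' ]]; return $?; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # as in 'lt 1.15 2' above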
00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.942 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.943 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.943 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.943 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:11.943 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.943 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:14.468 06:29:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.468 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:14.469 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
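A bit earlier in this run, nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and bash rejects the empty operand with "integer expression expected"; the build-flag variable behind it is unset in this environment, and its name is not visible in the trace. A defensive pattern, using a hypothetical $FLAG, avoids the malformed test:

    # Hypothetical guard: default an unset/empty flag to 0 before the numeric
    # test, so '[ "" -eq 1 ]' is never evaluated.
    if [[ ${FLAG:-0} -eq 1 ]]; then
        NVMF_APP+=(--hypothetical-extra-arg)   # placeholder consumer of the flag
    fi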
00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:14.469 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:14.469 Found net devices under 0000:84:00.0: cvl_0_0 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: 
cvl_0_1' 00:25:14.469 Found net devices under 0000:84:00.1: cvl_0_1 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.469 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.470 06:29:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:14.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:25:14.470 00:25:14.470 --- 10.0.0.2 ping statistics --- 00:25:14.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.470 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:14.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:25:14.470 00:25:14.470 --- 10.0.0.1 ping statistics --- 00:25:14.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.470 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1145196 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1145196 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1145196 ']' 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.470 06:29:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:14.470 [2024-12-08 06:29:04.304378] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:25:14.470 [2024-12-08 06:29:04.304458] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.470 [2024-12-08 06:29:04.374901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:14.470 [2024-12-08 06:29:04.430740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.470 [2024-12-08 06:29:04.430797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.470 [2024-12-08 06:29:04.430826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.470 [2024-12-08 06:29:04.430838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.470 [2024-12-08 06:29:04.430848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.470 [2024-12-08 06:29:04.432379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.470 [2024-12-08 06:29:04.432385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1145196 00:25:14.470 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:14.728 [2024-12-08 06:29:04.818321] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.728 06:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:15.293 Malloc0 00:25:15.293 06:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:15.550 06:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:15.808 06:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.066 [2024-12-08 06:29:05.983387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.067 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:16.330 [2024-12-08 06:29:06.252153] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:16.330 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1145478 00:25:16.331 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:16.331 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1145478 /var/tmp/bdevperf.sock 00:25:16.331 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1145478 ']' 00:25:16.331 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.331 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:16.331 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.331 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
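The stretch that follows drives bdevperf entirely over its RPC socket: start it idle with -z, attach both target listeners as multipath legs of a single Nvme0 bdev, then kick off I/O with perform_tests while the test flips ANA states underneath it. Condensed from the commands visible in the trace (a sketch; $SPDK stands in for the repo root path used throughout):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock

    # 1. Start bdevperf idle (-z: wait for RPC) with the workload from the trace.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 90 &

    # 2. Attach both listeners as multipath paths of one controller (-> Nvme0n1).
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # 3. Run I/O in the background while the ANA-state checks below proceed.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -t 120 -s $SOCK perform_tests &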
00:25:16.331 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.331 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:16.590 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.590 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:16.590 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:16.848 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:17.415 Nvme0n1 00:25:17.415 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:17.673 Nvme0n1 00:25:17.673 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:17.673 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:20.202 06:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:20.202 06:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:20.202 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:20.460 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:21.396 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:21.396 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:21.396 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.396 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:21.655 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.655 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:21.655 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.655 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:21.913 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.913 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:21.913 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.913 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:22.172 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.172 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:22.172 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.172 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:22.430 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.430 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:22.430 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.430 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:22.996 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.996 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:22.996 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.996 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:23.254 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.254 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:23.254 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
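Each check_status block in this trace is six invocations of the same probe: pull the io_paths view out of bdevperf and compare one field for one listener port against the expected value. Reconstructed from the repeated @64/@68..@73 expansions (a sketch of the helpers in host/multipath_status.sh, under the same $SPDK assumption as above):

    rpc_py="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # port_status <trsvcid> <field> <expected>; field is current|connected|accessible.
    port_status() {
        local port=$1 field=$2 expected=$3 got
        got=$($rpc_py bdev_nvme_get_io_paths \
              | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ $got == "$expected" ]]
    }

    # check_status <4420 current> <4421 current> <4420 connected> <4421 connected> \
    #              <4420 accessible> <4421 accessible>
    check_status() {
        port_status 4420 current "$1"    && port_status 4421 current "$2" &&
        port_status 4420 connected "$3"  && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }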
00:25:23.511 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:23.768 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:24.701 06:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:24.701 06:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:24.701 06:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.701 06:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:25.264 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.264 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:25.264 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.264 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:25.520 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.520 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:25.520 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.520 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:25.777 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.777 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:25.777 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.777 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:26.033 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.033 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:26.033 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:25:26.033 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:26.291 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.291 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:26.291 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.291 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:26.549 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.549 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:26.549 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:27.116 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:27.374 06:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:28.308 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:28.308 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:28.308 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.308 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:28.566 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.566 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:28.566 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.566 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:28.825 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:28.825 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:28.825 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.825 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:29.083 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.083 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:29.083 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.083 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:29.650 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.650 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:29.650 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.650 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:29.909 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.909 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:29.909 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.909 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.168 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.168 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:30.168 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:30.427 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:30.685 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:31.618 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:31.618 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:31.618 06:29:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.618 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.183 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.183 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:32.183 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.183 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.442 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.442 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.442 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.442 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.700 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.700 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.700 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.700 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:32.959 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.959 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:32.959 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.959 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:33.217 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.217 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:33.217 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.217 06:29:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.474 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:33.474 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:33.474 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:34.041 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:34.041 06:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:35.415 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:35.415 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:35.415 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.415 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:35.415 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.415 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:35.415 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.415 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.672 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.672 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.672 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.672 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:35.929 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.929 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:35.930 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.930 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:36.221 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.221 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:36.221 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.221 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:36.494 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.494 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:36.494 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.495 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:36.753 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.753 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:36.753 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:37.010 06:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:37.268 06:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:38.644 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:38.644 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:38.644 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.644 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:38.644 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:38.644 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:38.644 06:29:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.644 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:38.901 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.901 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:38.901 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.901 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:39.467 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.467 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:39.467 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.467 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:39.724 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.724 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:39.724 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.724 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:39.983 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:39.983 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:39.983 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.983 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:40.241 06:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.241 06:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:40.498 06:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:40.498 06:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:41.064 06:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:41.322 06:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:42.252 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:42.252 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:42.252 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.252 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:42.509 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.509 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:42.509 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.509 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:42.766 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.766 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:42.766 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.766 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:43.334 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.334 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:43.334 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.334 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.593 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.593 06:29:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:43.593 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.593 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:43.878 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.878 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:43.878 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.878 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:44.137 06:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.137 06:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:44.137 06:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:44.395 06:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:44.654 06:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:46.029 06:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:46.029 06:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:46.029 06:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.029 06:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:46.029 06:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.029 06:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:46.029 06:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.029 06:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:46.287 06:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.287 06:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:46.287 06:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.287 06:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:46.854 06:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.854 06:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:46.854 06:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.854 06:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:47.113 06:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.113 06:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:47.113 06:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.113 06:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:47.372 06:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.372 06:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:47.372 06:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.372 06:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:47.631 06:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.631 06:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:47.631 06:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:47.889 06:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:48.147 06:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
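The driver for all of these transitions is just two RPCs per step, one per listener, as the @59/@60 expansions show; after bdev_nvme_set_multipath_policy switches Nvme0n1 to active_active at @116, both paths can report current==true at once, which is why the later check_status calls expect true/true pairs. In sketch form, with a usage line matching the step the trace verifies next:

    # set_ANA_state <state for port 4420> <state for port 4421>
    # States seen in this trace: optimized | non_optimized | inaccessible.
    set_ANA_state() {
        $SPDK/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $SPDK/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    set_ANA_state non_optimized non_optimized
    sleep 1   # give the host a moment to pick up the new ANA states
    check_status true true true true true true   # active_active: both legs carry I/O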
00:25:49.525 06:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:49.525 06:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:49.525 06:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.525 06:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:49.525 06:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.525 06:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:49.525 06:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.525 06:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:49.784 06:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.784 06:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:49.784 06:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.784 06:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.351 06:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.351 06:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:50.351 06:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.351 06:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.609 06:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.609 06:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:50.610 06:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.610 06:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:50.868 06:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.868 06:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:50.868 06:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.868 06:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:51.127 06:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.127 06:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:51.127 06:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:51.386 06:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:51.644 06:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:53.022 06:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:53.022 06:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:53.022 06:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.022 06:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:53.022 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.022 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:53.022 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.022 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:53.281 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.281 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:53.281 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.281 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:53.847 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:53.847 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:53.847 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.847 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:54.105 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.105 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:54.105 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.105 06:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:54.364 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.364 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:54.364 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.364 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:54.622 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.622 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1145478 00:25:54.622 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1145478 ']' 00:25:54.622 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1145478 00:25:54.622 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:54.622 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.622 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1145478 00:25:54.622 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:54.622 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:54.622 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1145478' 00:25:54.622 killing process with pid 1145478 00:25:54.622 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1145478 00:25:54.622 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1145478 00:25:54.622 { 00:25:54.622 "results": [ 00:25:54.622 { 00:25:54.622 "job": "Nvme0n1", 
00:25:54.622 "core_mask": "0x4", 00:25:54.622 "workload": "verify", 00:25:54.622 "status": "terminated", 00:25:54.622 "verify_range": { 00:25:54.622 "start": 0, 00:25:54.622 "length": 16384 00:25:54.622 }, 00:25:54.622 "queue_depth": 128, 00:25:54.622 "io_size": 4096, 00:25:54.622 "runtime": 36.741473, 00:25:54.622 "iops": 8424.839145670616, 00:25:54.622 "mibps": 32.90952791277584, 00:25:54.622 "io_failed": 0, 00:25:54.622 "io_timeout": 0, 00:25:54.622 "avg_latency_us": 15169.833388337116, 00:25:54.622 "min_latency_us": 257.89629629629627, 00:25:54.622 "max_latency_us": 4026531.84 00:25:54.622 } 00:25:54.622 ], 00:25:54.622 "core_count": 1 00:25:54.622 } 00:25:54.892 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1145478 00:25:54.892 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:54.892 [2024-12-08 06:29:06.320425] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:25:54.892 [2024-12-08 06:29:06.320513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145478 ] 00:25:54.892 [2024-12-08 06:29:06.390468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.892 [2024-12-08 06:29:06.449474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.892 Running I/O for 90 seconds... 00:25:54.892 8675.00 IOPS, 33.89 MiB/s [2024-12-08T05:29:45.011Z] 8863.50 IOPS, 34.62 MiB/s [2024-12-08T05:29:45.011Z] 8873.67 IOPS, 34.66 MiB/s [2024-12-08T05:29:45.011Z] 8900.25 IOPS, 34.77 MiB/s [2024-12-08T05:29:45.011Z] 8888.00 IOPS, 34.72 MiB/s [2024-12-08T05:29:45.011Z] 8838.00 IOPS, 34.52 MiB/s [2024-12-08T05:29:45.011Z] 8854.00 IOPS, 34.59 MiB/s [2024-12-08T05:29:45.011Z] 8861.25 IOPS, 34.61 MiB/s [2024-12-08T05:29:45.011Z] 8853.00 IOPS, 34.58 MiB/s [2024-12-08T05:29:45.011Z] 8847.10 IOPS, 34.56 MiB/s [2024-12-08T05:29:45.011Z] 8856.18 IOPS, 34.59 MiB/s [2024-12-08T05:29:45.011Z] 8859.67 IOPS, 34.61 MiB/s [2024-12-08T05:29:45.011Z] 8845.46 IOPS, 34.55 MiB/s [2024-12-08T05:29:45.011Z] 8856.57 IOPS, 34.60 MiB/s [2024-12-08T05:29:45.011Z] 8853.80 IOPS, 34.59 MiB/s [2024-12-08T05:29:45.011Z] 8861.94 IOPS, 34.62 MiB/s [2024-12-08T05:29:45.011Z] [2024-12-08 06:29:23.864150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.892 [2024-12-08 06:29:23.864216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.892 [2024-12-08 06:29:23.864328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.892 [2024-12-08 06:29:23.864371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 
p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.892 [2024-12-08 06:29:23.864410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.892 [2024-12-08 06:29:23.864449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.892 [2024-12-08 06:29:23.864488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.892 [2024-12-08 06:29:23.864526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.892 [2024-12-08 06:29:23.864565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.892 [2024-12-08 06:29:23.864604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.892 [2024-12-08 06:29:23.864658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.892 [2024-12-08 06:29:23.864697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.892 [2024-12-08 06:29:23.864765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.892 [2024-12-08 06:29:23.864805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.892 [2024-12-08 06:29:23.864845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.892 [2024-12-08 06:29:23.864885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.864992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.892 [2024-12-08 06:29:23.865028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.865055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.892 [2024-12-08 06:29:23.865073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.865096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.892 [2024-12-08 06:29:23.865113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.865135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.892 [2024-12-08 06:29:23.865151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:54.892 [2024-12-08 06:29:23.865173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.892 [2024-12-08 06:29:23.865189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.865966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.865989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.866005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.866044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.866061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.866083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.866099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.866121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.866137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.866159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.866174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.866197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.866213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.866235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.866255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.867906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.893 [2024-12-08 06:29:23.867931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.867976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.893 [2024-12-08 06:29:23.867994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.868022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.893 [2024-12-08 06:29:23.868054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.868081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.893 [2024-12-08 06:29:23.868098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.868124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.893 [2024-12-08 06:29:23.868141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.868167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.893 [2024-12-08 06:29:23.868184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.868209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.893 [2024-12-08 06:29:23.868226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.868252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.893 [2024-12-08 06:29:23.868269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.868471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.893 [2024-12-08 06:29:23.868495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.868528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.893 [2024-12-08 06:29:23.868546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.868575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.893 [2024-12-08 06:29:23.868592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.868620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.893 [2024-12-08 06:29:23.868636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:54.893 [2024-12-08 06:29:23.868680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:23.868697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:23.868756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:23.868775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:23.868804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:23.868821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:23.868850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:23.868867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
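The JSON block that bdevperf printed on exit (above, just before this try.txt dump) is internally consistent: with "io_size": 4096, the reported "mibps" is simply iops * io_size / 2^20. A quick check, using only values copied from that block:

    awk 'BEGIN {
        iops = 8424.839145670616; io_size = 4096          # from the JSON summary
        printf "%.2f MiB/s\n", iops * io_size / 1048576   # prints 32.91
    }'
    # matches the reported "mibps": 32.90952791277584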
00:25:54.894 [2024-12-08 06:29:23.868897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:23.868914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:23.868943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.894 [2024-12-08 06:29:23.868960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:23.868989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.894 [2024-12-08 06:29:23.869008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:23.869052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.894 [2024-12-08 06:29:23.869069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:23.869097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.894 [2024-12-08 06:29:23.869113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:23.869141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.894 [2024-12-08 06:29:23.869158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:23.869186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.894 [2024-12-08 06:29:23.869202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:23.869230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.894 [2024-12-08 06:29:23.869246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:23.869280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.894 [2024-12-08 06:29:23.869297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:23.869326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:23.869342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:23.869371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:23.869388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:54.894 8344.94 IOPS, 32.60 MiB/s [2024-12-08T05:29:45.013Z] 7881.33 IOPS, 30.79 MiB/s [2024-12-08T05:29:45.013Z] 7466.53 IOPS, 29.17 MiB/s [2024-12-08T05:29:45.013Z] 7093.20 IOPS, 27.71 MiB/s [2024-12-08T05:29:45.013Z] 7183.48 IOPS, 28.06 MiB/s [2024-12-08T05:29:45.013Z] 7264.95 IOPS, 28.38 MiB/s [2024-12-08T05:29:45.013Z] 7325.78 IOPS, 28.62 MiB/s [2024-12-08T05:29:45.013Z] 7500.50 IOPS, 29.30 MiB/s [2024-12-08T05:29:45.013Z] 7663.36 IOPS, 29.93 MiB/s [2024-12-08T05:29:45.013Z] 7819.81 IOPS, 30.55 MiB/s [2024-12-08T05:29:45.013Z] 7925.70 IOPS, 30.96 MiB/s [2024-12-08T05:29:45.013Z] 7963.50 IOPS, 31.11 MiB/s [2024-12-08T05:29:45.013Z] 7992.72 IOPS, 31.22 MiB/s [2024-12-08T05:29:45.013Z] 8021.17 IOPS, 31.33 MiB/s [2024-12-08T05:29:45.013Z] 8105.32 IOPS, 31.66 MiB/s [2024-12-08T05:29:45.013Z] 8215.50 IOPS, 32.09 MiB/s [2024-12-08T05:29:45.013Z] 8313.55 IOPS, 32.47 MiB/s [2024-12-08T05:29:45.013Z] [2024-12-08 06:29:41.725028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.894 [2024-12-08 06:29:41.725334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.894 [2024-12-08 06:29:41.725373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
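Reading the ASYMMETRIC ACCESS INACCESSIBLE records above one by one is impractical; since the same bdevperf output is saved to try.txt (the file cat'ed at the top of this dump), it can be summarized instead. A sketch, assuming only the try.txt path shown in the trace:

    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    # total completions returned with ANA INACCESSIBLE status
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' "$log"
    # split the corresponding command prints by opcode
    grep -o 'NOTICE\*: READ\|NOTICE\*: WRITE' "$log" | sort | uniq -c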
00:25:54.894 [2024-12-08 06:29:41.725844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.725969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.725987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:54.894 [2024-12-08 06:29:41.726020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.894 [2024-12-08 06:29:41.726038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.726061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.726078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.726116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.726134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.726156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.726187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.726209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.726225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.726247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.726264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.726285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.726301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.726322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.726338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.726360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.726376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.726397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.895 [2024-12-08 06:29:41.726413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.726435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.895 [2024-12-08 06:29:41.726453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.727339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.727390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.727441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.727480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.727518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.727556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.727594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.727632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.727669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.727731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.895 [2024-12-08 06:29:41.727776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.895 [2024-12-08 06:29:41.727815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.895 [2024-12-08 06:29:41.727854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.895 [2024-12-08 06:29:41.727897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:54.895 [2024-12-08 06:29:41.727921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
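The killprocess helper traced earlier (autotest_common.sh@954-978, just before the JSON summary) can be sketched the same way from its xtrace. Only the branch actually taken in this run is filled in; the sudo branch is left as a stub because the trace does not exercise it.

    # Sketch of killprocess as exercised above (autotest_common.sh@954-978).
    killprocess() {
        [ -n "$1" ] || return 1                            # @954: a pid is required
        kill -0 "$1" || return 1                           # @958: process must be alive
        local process_name
        [ "$(uname)" = Linux ] &&                          # @959
            process_name=$(ps --no-headers -o comm= "$1")  # @960 (reactor_2 here)
        if [ "$process_name" = sudo ]; then                # @964: not taken in this run
            :                                              # sudo handling not shown in the trace
        fi
        echo "killing process with pid $1"                 # @972
        kill "$1"                                          # @973
        wait "$1"                                          # @978
    }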
00:25:54.895 [2024-12-08 06:29:41.727938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.727961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.895 [2024-12-08 06:29:41.727978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.728000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.895 [2024-12-08 06:29:41.728017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.728041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.895 [2024-12-08 06:29:41.728058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.728096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.895 [2024-12-08 06:29:41.728113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.728135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.895 [2024-12-08 06:29:41.728151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.728172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.895 [2024-12-08 06:29:41.728189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.728210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.895 [2024-12-08 06:29:41.728226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.728248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.895 [2024-12-08 06:29:41.728264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.728286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.895 [2024-12-08 06:29:41.728301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.728323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.895 [2024-12-08 06:29:41.728339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.728360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.895 [2024-12-08 06:29:41.728380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.728403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.895 [2024-12-08 06:29:41.728419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.728440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.895 [2024-12-08 06:29:41.728456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:25:54.895 [2024-12-08 06:29:41.728478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.895 [2024-12-08 06:29:41.728504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.728525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.728540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.728562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.728578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.728600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.728616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.728637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.728653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.728674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.728690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.728735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.728754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.728777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.728793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.728819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.728836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.728857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.728875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.728902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.728919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.728941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.728958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.728980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.728996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.730140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.730178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.730216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.730255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.730294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.896 [2024-12-08 06:29:41.730485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.896 [2024-12-08 06:29:41.730850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:54.896 [2024-12-08 06:29:41.730872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.897 [2024-12-08 06:29:41.730888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.730910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.897 [2024-12-08 06:29:41.730930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.730953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.730970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.732215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.732260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.732299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.732336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.732373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.732411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.732448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.732485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.897 [2024-12-08 06:29:41.732522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.897 [2024-12-08 06:29:41.732558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.897 [2024-12-08 06:29:41.732599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.897 [2024-12-08 06:29:41.732638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.897 [2024-12-08 06:29:41.732675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.732712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.732778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.897 [2024-12-08 06:29:41.732817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.897 [2024-12-08 06:29:41.732855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.732893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.732948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.732972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.732989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.897 [2024-12-08 06:29:41.733029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.897 [2024-12-08 06:29:41.733093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.733130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.733172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.733209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.733247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.733283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.733328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.733365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.733402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.733439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.733476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.897 [2024-12-08 06:29:41.733513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.897 [2024-12-08 06:29:41.733551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.897 [2024-12-08 06:29:41.733593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:54.897 [2024-12-08 06:29:41.733619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.733636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.733657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.733673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.733694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.733731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.733758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.733790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.733814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.733831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.733854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.733871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.736831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.736856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.736884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.736903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.736926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.736943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.736965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.736982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.737020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.737074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.737117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.737156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.737193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.737230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.737268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.737305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.737342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.737378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.737415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.737452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.737489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.737525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.737567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.737605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.737642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.737679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.737741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.737783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.737823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.737862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.737901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.737946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.737969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.737987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.738009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.738040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.738063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.898 [2024-12-08 06:29:41.738079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.738121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.738137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.738158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.898 [2024-12-08 06:29:41.738174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:54.898 [2024-12-08 06:29:41.738194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.738210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.738231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.738246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.738267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.738283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.738304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.738319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.738340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.738356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.738377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.738393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.738413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.738429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.738450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.738465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.738486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.738502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.738523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.738538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.738563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.738586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.738608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.738624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.738645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.738661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.738682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.738698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.739610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.739633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.739658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.739676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.739698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.739714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.739744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.739761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.739781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.739797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.739819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.739835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.740165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.740209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.740250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.740288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.740325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.740362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.740399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.740436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.740472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.740509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.740860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.740904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.740943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.740965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.740981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.741003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.741027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.741065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.899 [2024-12-08 06:29:41.741082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.741103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.741119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.741139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.899 [2024-12-08 06:29:41.741155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:54.899 [2024-12-08 06:29:41.741176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.900 [2024-12-08 06:29:41.741191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.741213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.741228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.741249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.741265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.741286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.900 [2024-12-08 06:29:41.741301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.741322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.741338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.741359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.741375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.900 [2024-12-08 06:29:41.743063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.900 [2024-12-08 06:29:41.743106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.900 [2024-12-08 06:29:41.743296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.900 [2024-12-08 06:29:41.743332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.900 [2024-12-08 06:29:41.743761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.900 [2024-12-08 06:29:41.743802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.900 [2024-12-08 06:29:41.743839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.743952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.743973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.900 [2024-12-08 06:29:41.743989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.744011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.900 [2024-12-08 06:29:41.744041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.744063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.900 [2024-12-08 06:29:41.744078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.744100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.900 [2024-12-08 06:29:41.744119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.744141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.900 [2024-12-08 06:29:41.744157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:54.900 [2024-12-08 06:29:41.744178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.900 [2024-12-08 06:29:41.744194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:54.900 [2024-12-08 06:29:41.744221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.900 [2024-12-08 06:29:41.744237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:54.900 [2024-12-08 06:29:41.744258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.900 [2024-12-08 06:29:41.744273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:54.900 [2024-12-08 06:29:41.744294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.900 [2024-12-08 06:29:41.744310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:54.900 [2024-12-08 06:29:41.744330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.900 [2024-12-08 06:29:41.744346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.744367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.744382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.744403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.744419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.745983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.746008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.746051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.746070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.746108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.746124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:25:54.901 [2024-12-08 06:29:41.746145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.746166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.746188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.746204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.746225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.746241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.746262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.746277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.746298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.746313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.746334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.746350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.747686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.747710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.747745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.747764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.747786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.747803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.747825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.747841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.747862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.747878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.747900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.747916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.747937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.747953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.747980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.747997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.748034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.748072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.748124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.748162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.748198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.748234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.748270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.748308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.748344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.748380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.748416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.748458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.901 [2024-12-08 06:29:41.748494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.901 [2024-12-08 06:29:41.748531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:54.901 [2024-12-08 06:29:41.748551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.748567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.748587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:54.902 [2024-12-08 06:29:41.748603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.748624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.748639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.748660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.748675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.748696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.748712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.748756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.748775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.748797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.748813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.748834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.748850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.748871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.748887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.748909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.748929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.748952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.748968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.748990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.749006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.749027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.749058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.749080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.749095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.749116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.749131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.749152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.749167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.749188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.749204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.749225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.749241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.749262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.749277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.749298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.749313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.749334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.749349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.749370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.749389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.749411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.749427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.749448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.749463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.751908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.751933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.751961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.751980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.752004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.752036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.752059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.752092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.752115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.752130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.752152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.752167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.752189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.752204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:25:54.902 [2024-12-08 06:29:41.752225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.752241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.752262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.752278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.752299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.752314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.752341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.752358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.752379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.752396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.752417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.902 [2024-12-08 06:29:41.752433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.752454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.752470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.752492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.902 [2024-12-08 06:29:41.752507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:54.902 [2024-12-08 06:29:41.752528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.752544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.752566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.752582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.752603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.752619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.752640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.752656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.752678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.752694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.752741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.752760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.752799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.752816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.752844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.752862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.752885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.752902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.752924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.752942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.752964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.752980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.753017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.753033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.753786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.753810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.753837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.753855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.753878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.753896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.753918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.753935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.753957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.753974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.753997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.754014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.754069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.754111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.754149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:54.903 [2024-12-08 06:29:41.754185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.754222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.754258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.754294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.754331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.754367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.754403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.754439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.754475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.754512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.754533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 
lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.754553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.755141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.755164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.755190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.903 [2024-12-08 06:29:41.755208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.755230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.755246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.755267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.755283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.755304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.755320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.755340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.755356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:54.903 [2024-12-08 06:29:41.755377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.903 [2024-12-08 06:29:41.755393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.904 [2024-12-08 06:29:41.755414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.904 [2024-12-08 06:29:41.755429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:54.904 [2024-12-08 06:29:41.755450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.904 [2024-12-08 06:29:41.755466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:54.904 [2024-12-08 06:29:41.755487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.904 [2024-12-08 06:29:41.755502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:54.904 [2024-12-08 06:29:41.755523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.904 [2024-12-08 06:29:41.755538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:54.904 [2024-12-08 06:29:41.755559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.904 [2024-12-08 06:29:41.755579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:54.904 [2024-12-08 06:29:41.755602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.904 [2024-12-08 06:29:41.755618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:54.904 [2024-12-08 06:29:41.755639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.904 [2024-12-08 06:29:41.755655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:54.904 [2024-12-08 06:29:41.755676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.904 [2024-12-08 06:29:41.755691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:54.904 [2024-12-08 06:29:41.755739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.904 [2024-12-08 06:29:41.755758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:54.904 [2024-12-08 06:29:41.755781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.904 [2024-12-08 06:29:41.755798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:54.904 [2024-12-08 06:29:41.755820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.904 [2024-12-08 06:29:41.755837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:54.904 [2024-12-08 06:29:41.755859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.904 [2024-12-08 06:29:41.755875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:25:54.904 [2024-12-08 06:29:41.755898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.904 [2024-12-08 06:29:41.755914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.755936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.904 [2024-12-08 06:29:41.755953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.755975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.904 [2024-12-08 06:29:41.755991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.756029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.904 [2024-12-08 06:29:41.756045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.756081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.756097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.756124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.756140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.756161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.904 [2024-12-08 06:29:41.756177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.756197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.904 [2024-12-08 06:29:41.756212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.756233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.756249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.756269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.756284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.756306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.756322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.756887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.756911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.756937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.756955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.756978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.756995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.757031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.757047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.757068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.757096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.757133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.757149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.757178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.757194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.757215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.757230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.757251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.904 [2024-12-08 06:29:41.757267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.757287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.904 [2024-12-08 06:29:41.757302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.757323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.757338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.757359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.904 [2024-12-08 06:29:41.757374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.757394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.904 [2024-12-08 06:29:41.757409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.757430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.757446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.757466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.904 [2024-12-08 06:29:41.757481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:54.904 [2024-12-08 06:29:41.757502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.904 [2024-12-08 06:29:41.757518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.757539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.757554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.759327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.759378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.759418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.759464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.759519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.759556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.759610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.759649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.759687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.759762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.759803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.759842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.759882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.759926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.759967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.759990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.760021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.760044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.760061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.760083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.760099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.760121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.760137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.760159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.760180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.760203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.760219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.760240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.760257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.760279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.760295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.760330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.760347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.760368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.760398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.760420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.760435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.760460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.905 [2024-12-08 06:29:41.760476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.760497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.760513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.762961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.762986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.763030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.763048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.763071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.763102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.763124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.763139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.763175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.905 [2024-12-08 06:29:41.763192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:25:54.905 [2024-12-08 06:29:41.763228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.763245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.763283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.763323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.763362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.763400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.763445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.763484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.763523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.763578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.763619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.763661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.763717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.763786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.763828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.763869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.763910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.763950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.763973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.763996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.764053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.764093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.764132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.764171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.764225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.764278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.764315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.764351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.764388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.764425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.764461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.764501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.764554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.764608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.764648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.764687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.764734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.906 [2024-12-08 06:29:41.764755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.766050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.766091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.766120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.766153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.766176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.766192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.766213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.906 [2024-12-08 06:29:41.766230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:54.906 [2024-12-08 06:29:41.766252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.766268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.766305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.766343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.766387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.766424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.766460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.766497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.766533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.766569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.766604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.766641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.766678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.766740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.766784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.766825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.766871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.766911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.766952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.766975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.766992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.767571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.767594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.767619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.767636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.767658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.767674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.767694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.767736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.767763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.767784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.767807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.767824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.767848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.767865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.767888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.767906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.767929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.767952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.767977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.767995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.768033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.768048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.768069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.768085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.768105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.768121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.768141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.768157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.768178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.768193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.768215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.907 [2024-12-08 06:29:41.768230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.769145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.769168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.769194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.769212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.769234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.769249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.769270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.907 [2024-12-08 06:29:41.769286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:54.907 [2024-12-08 06:29:41.769307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.769327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.769365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.769402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.769439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.769476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.769514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.769550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.769587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.769624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.769660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.769697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.769765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.769804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.769851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.769891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.769931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.769972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.769995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.770031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.770055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.770087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.770109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.770125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.770146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.770163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.770184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.770200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.770221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.770237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.770258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.770274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.770295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.770311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.770336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.770352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.770373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.770389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.770410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.770425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.770446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.770461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.770483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.770499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.772521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.772543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.772569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.772587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.772609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.772625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.772646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.772661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.772683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.908 [2024-12-08 06:29:41.772699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.772745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.772763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.772802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.772820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.772842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.908 [2024-12-08 06:29:41.772864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:54.908 [2024-12-08 06:29:41.772887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.772904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.772926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.772943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.772965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.909 [2024-12-08 06:29:41.772982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.909 [2024-12-08 06:29:41.773035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.773088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.773126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.773163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.909 [2024-12-08 06:29:41.773199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.909 [2024-12-08 06:29:41.773236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.773273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.773310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.909 [2024-12-08 06:29:41.773350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.909 [2024-12-08 06:29:41.773389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.773425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.773463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.773500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.773536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.773575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.773596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.773612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.774595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.909 [2024-12-08 06:29:41.774618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.774644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.909 [2024-12-08 06:29:41.774661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.774682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.909 [2024-12-08 06:29:41.774714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.774751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.909 [2024-12-08 06:29:41.774787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.774811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.909 [2024-12-08 06:29:41.774828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.774858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.909 [2024-12-08 06:29:41.774876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.774899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:54.909 [2024-12-08 06:29:41.774916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.774938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.774955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.774977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.774994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.775032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.775048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:25:54.909 [2024-12-08 06:29:41.775085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.909 [2024-12-08 06:29:41.775100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:54.909 [2024-12-08 06:29:41.775122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-12-08 06:29:41.775138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:54.909 [2024-12-08 06:29:41.775159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-12-08 06:29:41.775175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:54.909 [2024-12-08 06:29:41.775197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.909 [2024-12-08 06:29:41.775212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:54.909 [2024-12-08 06:29:41.775234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.909 [2024-12-08 06:29:41.775250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:54.909 [2024-12-08 06:29:41.775271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.909 [2024-12-08 06:29:41.775286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.909 [2024-12-08 06:29:41.775307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.909 [2024-12-08 06:29:41.775323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:54.909 [2024-12-08 06:29:41.775348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.909 [2024-12-08 06:29:41.775365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:54.909 [2024-12-08 06:29:41.775386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.909 [2024-12-08 06:29:41.775402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:54.909 [2024-12-08 06:29:41.775423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-12-08 06:29:41.775439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:54.909 [2024-12-08 06:29:41.775460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-12-08 06:29:41.775475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.775497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.775513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.776339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.776361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.776386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.776404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.776426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.776442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.776462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.776478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.776498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.776514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.776535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.776551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.776572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.776588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.776609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.776629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.776651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.776667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.776688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.776718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.776753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.776770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.776792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.776809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.777257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.777299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.777336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.777373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.777409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.777445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:25:54.910 [2024-12-08 06:29:41.777467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.777482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.777523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.777561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.777598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.777634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.777671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.777733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.777778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.777818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.777859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.777899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.777938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.777961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.777978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.778001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.910 [2024-12-08 06:29:41.778037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.778688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-12-08 06:29:41.778732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:54.910 [2024-12-08 06:29:41.778790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.911 [2024-12-08 06:29:41.778811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:54.911 [2024-12-08 06:29:41.778834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.911 [2024-12-08 06:29:41.778851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:54.911 [2024-12-08 06:29:41.778872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.911 [2024-12-08 06:29:41.778888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:54.911 [2024-12-08 06:29:41.778910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.911 [2024-12-08 06:29:41.778926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:54.911 [2024-12-08 06:29:41.778947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.911 [2024-12-08 06:29:41.778964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:54.911 [2024-12-08 06:29:41.778986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.911 [2024-12-08 06:29:41.779002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:54.911 [2024-12-08 06:29:41.779025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.911 [2024-12-08 06:29:41.779056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:54.911 [2024-12-08 06:29:41.779077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.911 [2024-12-08 06:29:41.779093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:54.911 [2024-12-08 06:29:41.779113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.934 [2024-12-08 06:29:41.779129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:54.934 [2024-12-08 06:29:41.779150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.934 [2024-12-08 06:29:41.779166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:54.934 [2024-12-08 06:29:41.779187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.934 [2024-12-08 06:29:41.779202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:54.934 [2024-12-08 06:29:41.779228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.934 [2024-12-08 06:29:41.779245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:54.934 [2024-12-08 06:29:41.779265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.934 [2024-12-08 06:29:41.779281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:54.934 [2024-12-08 06:29:41.779301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.934 [2024-12-08 06:29:41.779317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:54.934 [2024-12-08 06:29:41.779338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
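Note: the (03/02) status repeated in the completions above decodes as status code type 0x3 (path related) / status code 0x2 (asymmetric access inaccessible): the test has flipped the ANA state of one listener while verify I/O is in flight, so queued commands complete with a path error and are retried on the surviving path. A minimal sketch of how a test can drive that transition, assuming SPDK's rpc.py nvmf_subsystem_listener_set_ana_state subcommand with the nqn/address used by this suite (the flag names are assumptions, not taken from this log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Mark one path inaccessible; in-flight I/O on it completes with
    # ASYMMETRIC ACCESS INACCESSIBLE (03/02) and fails over to the other path.
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    sleep 5
    # Bring the path back so I/O can be redistributed.
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized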
00:25:54.934 [2024-12-08 06:29:41.779355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:54.934 8391.56 IOPS, 32.78 MiB/s
[2024-12-08T05:29:45.053Z] 8408.37 IOPS, 32.85 MiB/s
[2024-12-08T05:29:45.053Z] 8419.11 IOPS, 32.89 MiB/s
[2024-12-08T05:29:45.053Z] Received shutdown signal, test time was about 36.742290 seconds
00:25:54.934
00:25:54.934 Latency(us)
00:25:54.934 [2024-12-08T05:29:45.053Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:54.934 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:54.934 Verification LBA range: start 0x0 length 0x4000
00:25:54.934 Nvme0n1                     :      36.74    8424.84      32.91       0.00       0.00   15169.83     257.90 4026531.84
00:25:54.934 [2024-12-08T05:29:45.053Z] ===================================================================================================================
00:25:54.934 [2024-12-08T05:29:45.053Z] Total                       :    8424.84      32.91       0.00       0.00   15169.83     257.90 4026531.84
00:25:54.934 06:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:55.191 rmmod nvme_tcp
00:25:55.191 rmmod nvme_fabrics
00:25:55.191 rmmod nvme_keyring
00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1145196 ']'
00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1145196
00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1145196 ']'
00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1145196
00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:55.191 06:29:45
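The MiB/s column in the summary above is just IOPS times the 4096-byte IO size from the job line; a one-liner to sanity-check it (plain awk, nothing SPDK-specific):

    awk 'BEGIN { iops=8424.84; io=4096; printf "%.2f MiB/s\n", iops*io/(1024*1024) }'
    # prints 32.91 MiB/s, matching the Nvme0n1 and Total rows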
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1145196 00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1145196' 00:25:55.191 killing process with pid 1145196 00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1145196 00:25:55.191 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1145196 00:25:55.448 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:55.448 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:55.448 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:55.448 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:55.448 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:55.448 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:55.448 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:55.448 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:55.448 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:55.448 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.448 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.448 06:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.982 06:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:57.982 00:25:57.982 real 0m45.777s 00:25:57.982 user 2m19.888s 00:25:57.982 sys 0m12.369s 00:25:57.982 06:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:57.982 06:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:57.982 ************************************ 00:25:57.982 END TEST nvmf_host_multipath_status 00:25:57.982 ************************************ 00:25:57.982 06:29:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:57.982 06:29:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:57.982 06:29:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:57.982 06:29:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.982 ************************************ 00:25:57.982 START TEST nvmf_discovery_remove_ifc 00:25:57.982 ************************************ 00:25:57.982 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:57.982 * Looking for test storage... 00:25:57.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:57.982 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:57.982 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:25:57.982 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:57.982 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:57.982 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:57.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.983 --rc genhtml_branch_coverage=1 00:25:57.983 --rc genhtml_function_coverage=1 00:25:57.983 --rc genhtml_legend=1 00:25:57.983 --rc geninfo_all_blocks=1 00:25:57.983 --rc geninfo_unexecuted_blocks=1 00:25:57.983 00:25:57.983 ' 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:57.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.983 --rc genhtml_branch_coverage=1 00:25:57.983 --rc genhtml_function_coverage=1 00:25:57.983 --rc genhtml_legend=1 00:25:57.983 --rc geninfo_all_blocks=1 00:25:57.983 --rc geninfo_unexecuted_blocks=1 00:25:57.983 00:25:57.983 ' 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:57.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.983 --rc genhtml_branch_coverage=1 00:25:57.983 --rc genhtml_function_coverage=1 00:25:57.983 --rc genhtml_legend=1 00:25:57.983 --rc geninfo_all_blocks=1 00:25:57.983 --rc geninfo_unexecuted_blocks=1 00:25:57.983 00:25:57.983 ' 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:57.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.983 --rc genhtml_branch_coverage=1 00:25:57.983 --rc genhtml_function_coverage=1 00:25:57.983 --rc genhtml_legend=1 00:25:57.983 --rc geninfo_all_blocks=1 00:25:57.983 --rc geninfo_unexecuted_blocks=1 00:25:57.983 00:25:57.983 ' 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.983 
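The cmp_versions trace above is the suite deciding whether the installed lcov is older than 2 so it can pick the right coverage options; it splits each version string on '.', '-' and ':' and compares field by field. A simplified stand-in for that check (a sketch, not the full scripts/common.sh implementation):

    lt() {
        local IFS=.
        local -a a=($1) b=($2); local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field wins
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo 'old lcov: add --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'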
06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:57.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:57.983 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:57.984 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:57.984 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:57.984 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.984 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:57.984 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:57.984 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:57.984 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.984 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.984 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.984 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:57.984 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:57.984 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:57.984 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:59.879 06:29:49 
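Earlier in this block common.sh generated the host identity with nvme gen-hostnqn, which yields an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid> (visible above as NVME_HOSTNQN/NVME_HOSTID). A rough equivalent without nvme-cli, assuming uuidgen is an acceptable stand-in for test purposes:

    NVME_HOSTID=$(uuidgen)
    NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${NVME_HOSTID}"
    # later connect calls pass both identifiers:
    #   nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" ...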
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:59.879 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:59.880 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.880 06:29:49 
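The loop being traced here is the NIC discovery: common.sh whitelists known Intel/Mellanox device IDs (the e810/x722/mlx arrays above), then walks sysfs to find the kernel net devices behind each matching PCI function. The same walk in isolation, with the two ice functions from this node as assumed input:

    for pci in 0000:84:00.0 0000:84:00.1; do
        for net in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
        done
    done
    # on this node this yields cvl_0_0 and cvl_0_1, as echoed by the script below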
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:59.880 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:59.880 Found net devices under 0000:84:00.0: cvl_0_0 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:59.880 Found net devices under 0000:84:00.1: cvl_0_1 00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}")
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:59.880 06:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
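The trace above is nvmf_tcp_init building the test topology out of the two E810 ports it just enumerated: cvl_0_0 becomes the target interface inside a private network namespace, while cvl_0_1 stays in the root namespace as the initiator. Condensed into a standalone sketch (interface, namespace, and address values are the ones from this particular run; on other hardware the pci_net_devs loop would pick different names):

    # Give the target its own namespace so initiator and target stacks
    # can coexist on one machine without address clashes.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Initiator answers on 10.0.0.1; target on 10.0.0.2 inside the namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the SPDK_NVMF comment tag is what teardown
    # greps for later to remove exactly these rules.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'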
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:00.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:00.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms
00:26:00.139
00:26:00.139 --- 10.0.0.2 ping statistics ---
00:26:00.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:00.139 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:00.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:00.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms
00:26:00.139
00:26:00.139 --- 10.0.0.1 ping statistics ---
00:26:00.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:00.139 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1152111
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1152111
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1152111 ']'
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
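With the namespace plumbed, the harness proves the link with one ping in each direction, loads the kernel's nvme-tcp module, and reuses the regular app-start path by prepending ip netns exec to NVMF_APP, so the target process itself runs inside the namespace. Roughly (the nvmf_tgt path is shown relative to the SPDK tree):

    NS=cvl_0_0_ns_spdk
    ping -c 1 10.0.0.2                        # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> initiator
    modprobe nvme-tcp
    # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") has the same effect as:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!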
00:26:00.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.139 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.139 [2024-12-08 06:29:50.118603] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:26:00.139 [2024-12-08 06:29:50.118679] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.139 [2024-12-08 06:29:50.197227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.139 [2024-12-08 06:29:50.257000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.139 [2024-12-08 06:29:50.257086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.139 [2024-12-08 06:29:50.257101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.139 [2024-12-08 06:29:50.257112] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.139 [2024-12-08 06:29:50.257122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:00.139 [2024-12-08 06:29:50.257812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.398 [2024-12-08 06:29:50.414805] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.398 [2024-12-08 06:29:50.423044] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:00.398 null0 00:26:00.398 [2024-12-08 06:29:50.454906] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1152258 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1152258 /tmp/host.sock 00:26:00.398 06:29:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1152258 ']' 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:00.398 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.398 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.657 [2024-12-08 06:29:50.525107] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:26:00.657 [2024-12-08 06:29:50.525190] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152258 ] 00:26:00.657 [2024-12-08 06:29:50.591774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.657 [2024-12-08 06:29:50.649798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.657 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.657 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:00.657 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:00.657 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:00.657 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.657 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.657 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.657 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:00.657 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.657 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.916 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.916 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:00.916 06:29:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.916 06:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.946 [2024-12-08 06:29:51.882525] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:01.946 [2024-12-08 06:29:51.882559] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:01.946 [2024-12-08 06:29:51.882583] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:01.946 [2024-12-08 06:29:52.010987] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:02.204 [2024-12-08 06:29:52.070805] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:02.204 [2024-12-08 06:29:52.071932] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15380d0:1 started. 00:26:02.204 [2024-12-08 06:29:52.073670] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:02.204 [2024-12-08 06:29:52.073758] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:02.204 [2024-12-08 06:29:52.073810] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:02.204 [2024-12-08 06:29:52.073833] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:02.204 [2024-12-08 06:29:52.073877] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:02.204 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.204 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:02.204 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.204 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.204 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.204 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.204 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.204 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.204 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.204 [2024-12-08 06:29:52.081013] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15380d0 was disconnected and freed. delete nvme_qpair. 
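To recap the host side that produced the attach above: a second SPDK app is started on its own RPC socket (/tmp/host.sock), held in init state by --wait-for-rpc, then pointed at the discovery service on port 8009. rpc_cmd is the harness wrapper around the scripts/rpc.py client, so the same sequence can be written out explicitly; the flags are exactly the ones in the trace:

    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    # --wait-for-attach blocks until the discovery log page has been fetched
    # and the reported subsystem is attached (ctrlr nvme0, namespace bdev nvme0n1).
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach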
00:26:02.204 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:02.205 06:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:03.137 06:29:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.137 06:29:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.137 06:29:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.137 06:29:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.137 06:29:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.137 06:29:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.137 06:29:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.137 06:29:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.137 06:29:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:03.137 06:29:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:04.511 06:29:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:04.511 06:29:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.511 06:29:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:04.511 06:29:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.511 06:29:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:04.511 06:29:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.511 06:29:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:04.511 06:29:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.511 06:29:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:04.511 06:29:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:05.443 06:29:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:05.443 06:29:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.443 06:29:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:05.443 06:29:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.443 06:29:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:05.443 06:29:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.443 06:29:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:05.443 06:29:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.443 06:29:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:05.443 06:29:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:06.372 06:29:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:06.372 06:29:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.372 06:29:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:06.372 06:29:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.372 06:29:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:06.372 06:29:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.372 06:29:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:06.372 06:29:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.372 06:29:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:06.372 06:29:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:07.303 06:29:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.303 06:29:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.303 06:29:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.303 06:29:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.303 06:29:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.303 06:29:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.303 06:29:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.303 06:29:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.561 06:29:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:07.561 06:29:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:07.561 [2024-12-08 06:29:57.514910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:07.561 [2024-12-08 06:29:57.514994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.561 [2024-12-08 06:29:57.515017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.561 [2024-12-08 06:29:57.515036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.561 [2024-12-08 06:29:57.515048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.561 [2024-12-08 06:29:57.515062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.561 [2024-12-08 06:29:57.515074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.561 [2024-12-08 06:29:57.515087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.561 [2024-12-08 06:29:57.515100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.561 [2024-12-08 06:29:57.515112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.561 [2024-12-08 06:29:57.515124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.561 [2024-12-08 06:29:57.515137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15149e0 is same with the state(6) to be set 00:26:07.561 [2024-12-08 06:29:57.524928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15149e0 (9): Bad file descriptor 00:26:07.561 [2024-12-08 06:29:57.534970] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:07.561 [2024-12-08 06:29:57.534992] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
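Every sleep-1 iteration in this stretch is the same probe: list the bdev names over the host socket and compare against what the current step expects ("nvme0n1" while the path is healthy, the empty string once removal should have completed). Condensed, matching the get_bdev_list/wait_for_bdev helpers the trace keeps re-entering (retry bounding elided):

    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {    # $1 = expected list, e.g. "nvme0n1" or ""
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }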
00:26:07.561 [2024-12-08 06:29:57.535006] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:07.561 [2024-12-08 06:29:57.535031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:07.561 [2024-12-08 06:29:57.535093] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:08.493 06:29:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.493 06:29:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.493 06:29:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.493 06:29:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.493 06:29:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.493 06:29:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.493 06:29:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.493 [2024-12-08 06:29:58.556769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:08.493 [2024-12-08 06:29:58.556861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15149e0 with addr=10.0.0.2, port=4420 00:26:08.493 [2024-12-08 06:29:58.556892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15149e0 is same with the state(6) to be set 00:26:08.493 [2024-12-08 06:29:58.556949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15149e0 (9): Bad file descriptor 00:26:08.493 [2024-12-08 06:29:58.557425] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:26:08.493 [2024-12-08 06:29:58.557475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:08.493 [2024-12-08 06:29:58.557493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:08.493 [2024-12-08 06:29:58.557510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:08.493 [2024-12-08 06:29:58.557524] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:08.493 [2024-12-08 06:29:58.557536] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:08.493 [2024-12-08 06:29:58.557544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:08.493 [2024-12-08 06:29:58.557559] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
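The disconnect/reconnect churn above is the bdev_nvme layer acting on the three timeouts passed to bdev_nvme_start_discovery: reconnect attempts are retried every reconnect-delay-sec (1 s), I/O starts failing fast after fast-io-fail-timeout-sec (1 s), and once ctrlr-loss-timeout-sec (2 s) has elapsed the controller and its nvme0n1 bdev are deleted, which is what eventually lets wait_for_bdev '' succeed. The failure being reacted to was injected earlier with nothing more than:

    # Yank the target address out from under the established connection.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''    # list goes empty once the 2 s ctrlr-loss timeout fires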
00:26:08.493 [2024-12-08 06:29:58.557568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:08.493 06:29:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.493 06:29:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:08.493 06:29:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:09.863 [2024-12-08 06:29:59.560068] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:09.863 [2024-12-08 06:29:59.560105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:09.863 [2024-12-08 06:29:59.560126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:09.863 [2024-12-08 06:29:59.560154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:09.863 [2024-12-08 06:29:59.560169] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:09.863 [2024-12-08 06:29:59.560182] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:09.863 [2024-12-08 06:29:59.560192] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:09.863 [2024-12-08 06:29:59.560199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:09.863 [2024-12-08 06:29:59.560254] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:09.863 [2024-12-08 06:29:59.560298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.864 [2024-12-08 06:29:59.560321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.864 [2024-12-08 06:29:59.560342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.864 [2024-12-08 06:29:59.560364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.864 [2024-12-08 06:29:59.560377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.864 [2024-12-08 06:29:59.560389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.864 [2024-12-08 06:29:59.560403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.864 [2024-12-08 06:29:59.560415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.864 [2024-12-08 06:29:59.560435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.864 [2024-12-08 06:29:59.560450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.864 [2024-12-08 06:29:59.560463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:09.864 [2024-12-08 06:29:59.560517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1503d20 (9): Bad file descriptor 00:26:09.864 [2024-12-08 06:29:59.561508] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:09.864 [2024-12-08 06:29:59.561532] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:09.864 06:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:10.794 06:30:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:10.794 06:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:10.794 06:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:10.794 06:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.794 06:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:10.794 06:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.794 06:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:10.794 06:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.794 06:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:10.794 06:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:11.723 [2024-12-08 06:30:01.617837] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:11.723 [2024-12-08 06:30:01.617868] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:11.723 [2024-12-08 06:30:01.617893] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:11.723 [2024-12-08 06:30:01.704200] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:11.723 06:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:11.723 06:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.723 06:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.723 06:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:11.723 06:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.723 06:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:11.723 06:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:11.723 [2024-12-08 06:30:01.758998] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:11.723 [2024-12-08 06:30:01.759913] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x14edc50:1 started. 
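Recovery is the mirror image: re-plumb the namespaced interface and let the still-running discovery connection re-attach on its own, with no new RPC from the test. The fresh controller comes up as nvme1, so the namespace bdev to wait for is nvme1n1:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1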
00:26:11.723 [2024-12-08 06:30:01.761372] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:11.723 [2024-12-08 06:30:01.761417] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:11.723 [2024-12-08 06:30:01.761450] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:11.723 [2024-12-08 06:30:01.761472] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:11.723 [2024-12-08 06:30:01.761484] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:11.723 06:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.723 [2024-12-08 06:30:01.765977] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x14edc50 was disconnected and freed. delete nvme_qpair. 00:26:11.723 06:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:11.723 06:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1152258 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1152258 ']' 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1152258 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1152258 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1152258' 00:26:13.113 killing process with pid 1152258 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1152258 00:26:13.113 06:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1152258 00:26:13.113 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:13.113 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:13.113 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:13.113 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:13.113 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:13.113 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:13.113 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:13.113 rmmod nvme_tcp 00:26:13.113 rmmod nvme_fabrics 00:26:13.113 rmmod nvme_keyring 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1152111 ']' 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1152111 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1152111 ']' 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1152111 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1152111 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1152111' 00:26:13.114 killing process with pid 1152111 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1152111 00:26:13.114 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1152111 00:26:13.372 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:13.372 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:13.372 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:13.372 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:13.372 06:30:03 
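Teardown, traced next, is symmetric with setup: killprocess stops the host app and then the namespaced target, nvmfcleanup unloads the kernel modules (the rmmod lines above are modprobe -v -r output), and the iptr helper removes exactly the firewall rules setup tagged, by filtering the SPDK_NVMF comment out of a full ruleset dump. In outline (remove_spdk_ns is rendered as a plain ip netns delete, which is an assumption about what that helper expands to):

    kill "$hostpid" && wait "$hostpid"      # host app (reactor_0)
    modprobe -v -r nvme-tcp                 # rmmod nvme_tcp/nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"      # namespaced target (reactor_1)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only tagged rules
    ip netns delete cvl_0_0_ns_spdk         # assumption: remove_spdk_ns equivalent
    ip -4 addr flush cvl_0_1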
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:13.372 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:13.372 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:13.372 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:13.372 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:13.372 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.372 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.372 06:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:15.909 00:26:15.909 real 0m17.894s 00:26:15.909 user 0m25.769s 00:26:15.909 sys 0m3.116s 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.909 ************************************ 00:26:15.909 END TEST nvmf_discovery_remove_ifc 00:26:15.909 ************************************ 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.909 ************************************ 00:26:15.909 START TEST nvmf_identify_kernel_target 00:26:15.909 ************************************ 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:15.909 * Looking for test storage... 
00:26:15.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:15.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.909 --rc genhtml_branch_coverage=1 00:26:15.909 --rc genhtml_function_coverage=1 00:26:15.909 --rc genhtml_legend=1 00:26:15.909 --rc geninfo_all_blocks=1 00:26:15.909 --rc geninfo_unexecuted_blocks=1 00:26:15.909 00:26:15.909 ' 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:15.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.909 --rc genhtml_branch_coverage=1 00:26:15.909 --rc genhtml_function_coverage=1 00:26:15.909 --rc genhtml_legend=1 00:26:15.909 --rc geninfo_all_blocks=1 00:26:15.909 --rc geninfo_unexecuted_blocks=1 00:26:15.909 00:26:15.909 ' 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:15.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.909 --rc genhtml_branch_coverage=1 00:26:15.909 --rc genhtml_function_coverage=1 00:26:15.909 --rc genhtml_legend=1 00:26:15.909 --rc geninfo_all_blocks=1 00:26:15.909 --rc geninfo_unexecuted_blocks=1 00:26:15.909 00:26:15.909 ' 00:26:15.909 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:15.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.910 --rc genhtml_branch_coverage=1 00:26:15.910 --rc genhtml_function_coverage=1 00:26:15.910 --rc genhtml_legend=1 00:26:15.910 --rc geninfo_all_blocks=1 00:26:15.910 --rc geninfo_unexecuted_blocks=1 00:26:15.910 00:26:15.910 ' 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:15.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:15.910 06:30:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:17.812 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.812 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:17.813 06:30:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:17.813 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:17.813 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:17.813 Found net devices under 0000:84:00.0: cvl_0_0 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:17.813 Found net devices under 0000:84:00.1: cvl_0_1 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:17.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:26:17.813 00:26:17.813 --- 10.0.0.2 ping statistics --- 00:26:17.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.813 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:26:17.813 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:17.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:26:17.813 00:26:17.813 --- 10.0.0.1 ping statistics --- 00:26:17.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.814 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.814 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.073 06:30:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:18.073 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:18.073 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:18.073 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:18.073 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:18.073 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:18.073 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:18.073 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:18.073 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:18.073 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:18.073 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:18.073 06:30:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:19.010 Waiting for block devices as requested 00:26:19.268 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:26:19.268 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:19.527 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:19.527 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:19.527 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:19.527 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:19.787 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:19.787 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:19.787 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:19.787 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:20.048 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:20.048 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:20.048 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:20.308 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:20.308 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:20.308 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:20.308 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
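The stretch of trace that follows builds the kernel NVMe-oF target through the nvmet configfs tree: create the subsystem, back namespace 1 with the free local drive, open a TCP listener on 10.0.0.1:4420, and link the two. Condensed into a standalone sketch — note the xtrace does not show redirection targets, so the attribute names below are the standard nvmet configfs ones, and /dev/nvme0n1 is the free device the block scan above settles on:

    # Minimal sketch of the configure_kernel_target sequence traced below.
    modprobe nvmet        # kernel target core
    modprobe nvmet_tcp    # TCP transport (loaded explicitly here for
                          # self-containment; the traced run relies on autoload)

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    mkdir "$subsys"                                 # create the subsystem
    echo 1 > "$subsys/attr_allow_any_host"          # no host whitelist in tests

    mkdir "$subsys/namespaces/1"                    # namespace 1, backed by
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # the local drive
    echo 1 > "$subsys/namespaces/1/enable"

    mkdir "$nvmet/ports/1"                          # TCP listener definition
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"

    # The listener only goes live once the subsystem is linked under the port,
    # which is why this comes last.
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"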
00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:20.567 No valid GPT data, bailing 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:20.567 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:20.568 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:26:20.829 00:26:20.829 Discovery Log Number of Records 2, Generation counter 2 00:26:20.829 =====Discovery Log Entry 0====== 00:26:20.829 trtype: tcp 00:26:20.829 adrfam: ipv4 00:26:20.829 subtype: current discovery subsystem 00:26:20.829 treq: not specified, sq flow control disable supported 00:26:20.829 portid: 1 00:26:20.829 trsvcid: 4420 00:26:20.829 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:20.829 traddr: 10.0.0.1 00:26:20.829 eflags: none 00:26:20.829 sectype: none 00:26:20.829 =====Discovery Log Entry 1====== 00:26:20.829 trtype: tcp 00:26:20.829 adrfam: ipv4 00:26:20.829 subtype: nvme subsystem 00:26:20.829 treq: not specified, sq flow control disable 
supported 00:26:20.829 portid: 1 00:26:20.829 trsvcid: 4420 00:26:20.829 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:20.829 traddr: 10.0.0.1 00:26:20.829 eflags: none 00:26:20.829 sectype: none 00:26:20.829 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:20.829 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:20.829 ===================================================== 00:26:20.829 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:20.829 ===================================================== 00:26:20.829 Controller Capabilities/Features 00:26:20.829 ================================ 00:26:20.829 Vendor ID: 0000 00:26:20.829 Subsystem Vendor ID: 0000 00:26:20.829 Serial Number: ae43db092ba8556922e5 00:26:20.829 Model Number: Linux 00:26:20.829 Firmware Version: 6.8.9-20 00:26:20.829 Recommended Arb Burst: 0 00:26:20.829 IEEE OUI Identifier: 00 00 00 00:26:20.829 Multi-path I/O 00:26:20.829 May have multiple subsystem ports: No 00:26:20.829 May have multiple controllers: No 00:26:20.829 Associated with SR-IOV VF: No 00:26:20.829 Max Data Transfer Size: Unlimited 00:26:20.829 Max Number of Namespaces: 0 00:26:20.829 Max Number of I/O Queues: 1024 00:26:20.829 NVMe Specification Version (VS): 1.3 00:26:20.829 NVMe Specification Version (Identify): 1.3 00:26:20.829 Maximum Queue Entries: 1024 00:26:20.829 Contiguous Queues Required: No 00:26:20.829 Arbitration Mechanisms Supported 00:26:20.829 Weighted Round Robin: Not Supported 00:26:20.829 Vendor Specific: Not Supported 00:26:20.829 Reset Timeout: 7500 ms 00:26:20.829 Doorbell Stride: 4 bytes 00:26:20.829 NVM Subsystem Reset: Not Supported 00:26:20.829 Command Sets Supported 00:26:20.829 NVM Command Set: Supported 00:26:20.829 Boot Partition: Not Supported 00:26:20.829 Memory Page Size Minimum: 4096 bytes 00:26:20.829 Memory Page Size Maximum: 4096 bytes 00:26:20.829 Persistent Memory Region: Not Supported 00:26:20.829 Optional Asynchronous Events Supported 00:26:20.829 Namespace Attribute Notices: Not Supported 00:26:20.829 Firmware Activation Notices: Not Supported 00:26:20.829 ANA Change Notices: Not Supported 00:26:20.829 PLE Aggregate Log Change Notices: Not Supported 00:26:20.829 LBA Status Info Alert Notices: Not Supported 00:26:20.829 EGE Aggregate Log Change Notices: Not Supported 00:26:20.829 Normal NVM Subsystem Shutdown event: Not Supported 00:26:20.829 Zone Descriptor Change Notices: Not Supported 00:26:20.829 Discovery Log Change Notices: Supported 00:26:20.829 Controller Attributes 00:26:20.829 128-bit Host Identifier: Not Supported 00:26:20.829 Non-Operational Permissive Mode: Not Supported 00:26:20.829 NVM Sets: Not Supported 00:26:20.829 Read Recovery Levels: Not Supported 00:26:20.829 Endurance Groups: Not Supported 00:26:20.829 Predictable Latency Mode: Not Supported 00:26:20.829 Traffic Based Keep ALive: Not Supported 00:26:20.829 Namespace Granularity: Not Supported 00:26:20.829 SQ Associations: Not Supported 00:26:20.829 UUID List: Not Supported 00:26:20.829 Multi-Domain Subsystem: Not Supported 00:26:20.829 Fixed Capacity Management: Not Supported 00:26:20.829 Variable Capacity Management: Not Supported 00:26:20.829 Delete Endurance Group: Not Supported 00:26:20.829 Delete NVM Set: Not Supported 00:26:20.829 Extended LBA Formats Supported: Not Supported 00:26:20.829 Flexible Data Placement 
Supported: Not Supported 00:26:20.829 00:26:20.829 Controller Memory Buffer Support 00:26:20.829 ================================ 00:26:20.829 Supported: No 00:26:20.829 00:26:20.829 Persistent Memory Region Support 00:26:20.829 ================================ 00:26:20.829 Supported: No 00:26:20.829 00:26:20.829 Admin Command Set Attributes 00:26:20.829 ============================ 00:26:20.829 Security Send/Receive: Not Supported 00:26:20.829 Format NVM: Not Supported 00:26:20.829 Firmware Activate/Download: Not Supported 00:26:20.829 Namespace Management: Not Supported 00:26:20.829 Device Self-Test: Not Supported 00:26:20.829 Directives: Not Supported 00:26:20.829 NVMe-MI: Not Supported 00:26:20.829 Virtualization Management: Not Supported 00:26:20.829 Doorbell Buffer Config: Not Supported 00:26:20.829 Get LBA Status Capability: Not Supported 00:26:20.829 Command & Feature Lockdown Capability: Not Supported 00:26:20.829 Abort Command Limit: 1 00:26:20.829 Async Event Request Limit: 1 00:26:20.829 Number of Firmware Slots: N/A 00:26:20.829 Firmware Slot 1 Read-Only: N/A 00:26:20.829 Firmware Activation Without Reset: N/A 00:26:20.829 Multiple Update Detection Support: N/A 00:26:20.829 Firmware Update Granularity: No Information Provided 00:26:20.829 Per-Namespace SMART Log: No 00:26:20.829 Asymmetric Namespace Access Log Page: Not Supported 00:26:20.829 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:20.829 Command Effects Log Page: Not Supported 00:26:20.829 Get Log Page Extended Data: Supported 00:26:20.829 Telemetry Log Pages: Not Supported 00:26:20.829 Persistent Event Log Pages: Not Supported 00:26:20.829 Supported Log Pages Log Page: May Support 00:26:20.829 Commands Supported & Effects Log Page: Not Supported 00:26:20.829 Feature Identifiers & Effects Log Page:May Support 00:26:20.829 NVMe-MI Commands & Effects Log Page: May Support 00:26:20.829 Data Area 4 for Telemetry Log: Not Supported 00:26:20.829 Error Log Page Entries Supported: 1 00:26:20.830 Keep Alive: Not Supported 00:26:20.830 00:26:20.830 NVM Command Set Attributes 00:26:20.830 ========================== 00:26:20.830 Submission Queue Entry Size 00:26:20.830 Max: 1 00:26:20.830 Min: 1 00:26:20.830 Completion Queue Entry Size 00:26:20.830 Max: 1 00:26:20.830 Min: 1 00:26:20.830 Number of Namespaces: 0 00:26:20.830 Compare Command: Not Supported 00:26:20.830 Write Uncorrectable Command: Not Supported 00:26:20.830 Dataset Management Command: Not Supported 00:26:20.830 Write Zeroes Command: Not Supported 00:26:20.830 Set Features Save Field: Not Supported 00:26:20.830 Reservations: Not Supported 00:26:20.830 Timestamp: Not Supported 00:26:20.830 Copy: Not Supported 00:26:20.830 Volatile Write Cache: Not Present 00:26:20.830 Atomic Write Unit (Normal): 1 00:26:20.830 Atomic Write Unit (PFail): 1 00:26:20.830 Atomic Compare & Write Unit: 1 00:26:20.830 Fused Compare & Write: Not Supported 00:26:20.830 Scatter-Gather List 00:26:20.830 SGL Command Set: Supported 00:26:20.830 SGL Keyed: Not Supported 00:26:20.830 SGL Bit Bucket Descriptor: Not Supported 00:26:20.830 SGL Metadata Pointer: Not Supported 00:26:20.830 Oversized SGL: Not Supported 00:26:20.830 SGL Metadata Address: Not Supported 00:26:20.830 SGL Offset: Supported 00:26:20.830 Transport SGL Data Block: Not Supported 00:26:20.830 Replay Protected Memory Block: Not Supported 00:26:20.830 00:26:20.830 Firmware Slot Information 00:26:20.830 ========================= 00:26:20.830 Active slot: 0 00:26:20.830 00:26:20.830 00:26:20.830 Error Log 00:26:20.830 
========= 00:26:20.830 00:26:20.830 Active Namespaces 00:26:20.830 ================= 00:26:20.830 Discovery Log Page 00:26:20.830 ================== 00:26:20.830 Generation Counter: 2 00:26:20.830 Number of Records: 2 00:26:20.830 Record Format: 0 00:26:20.830 00:26:20.830 Discovery Log Entry 0 00:26:20.830 ---------------------- 00:26:20.830 Transport Type: 3 (TCP) 00:26:20.830 Address Family: 1 (IPv4) 00:26:20.830 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:20.830 Entry Flags: 00:26:20.830 Duplicate Returned Information: 0 00:26:20.830 Explicit Persistent Connection Support for Discovery: 0 00:26:20.830 Transport Requirements: 00:26:20.830 Secure Channel: Not Specified 00:26:20.830 Port ID: 1 (0x0001) 00:26:20.830 Controller ID: 65535 (0xffff) 00:26:20.830 Admin Max SQ Size: 32 00:26:20.830 Transport Service Identifier: 4420 00:26:20.830 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:20.830 Transport Address: 10.0.0.1 00:26:20.830 Discovery Log Entry 1 00:26:20.830 ---------------------- 00:26:20.830 Transport Type: 3 (TCP) 00:26:20.830 Address Family: 1 (IPv4) 00:26:20.830 Subsystem Type: 2 (NVM Subsystem) 00:26:20.830 Entry Flags: 00:26:20.830 Duplicate Returned Information: 0 00:26:20.830 Explicit Persistent Connection Support for Discovery: 0 00:26:20.830 Transport Requirements: 00:26:20.830 Secure Channel: Not Specified 00:26:20.830 Port ID: 1 (0x0001) 00:26:20.830 Controller ID: 65535 (0xffff) 00:26:20.830 Admin Max SQ Size: 32 00:26:20.830 Transport Service Identifier: 4420 00:26:20.830 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:20.830 Transport Address: 10.0.0.1 00:26:20.830 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:20.830 get_feature(0x01) failed 00:26:20.830 get_feature(0x02) failed 00:26:20.830 get_feature(0x04) failed 00:26:20.830 ===================================================== 00:26:20.830 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:20.830 ===================================================== 00:26:20.830 Controller Capabilities/Features 00:26:20.830 ================================ 00:26:20.830 Vendor ID: 0000 00:26:20.830 Subsystem Vendor ID: 0000 00:26:20.830 Serial Number: c66b87fca38c67c3d416 00:26:20.830 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:20.830 Firmware Version: 6.8.9-20 00:26:20.830 Recommended Arb Burst: 6 00:26:20.830 IEEE OUI Identifier: 00 00 00 00:26:20.830 Multi-path I/O 00:26:20.830 May have multiple subsystem ports: Yes 00:26:20.830 May have multiple controllers: Yes 00:26:20.830 Associated with SR-IOV VF: No 00:26:20.830 Max Data Transfer Size: Unlimited 00:26:20.830 Max Number of Namespaces: 1024 00:26:20.830 Max Number of I/O Queues: 128 00:26:20.830 NVMe Specification Version (VS): 1.3 00:26:20.830 NVMe Specification Version (Identify): 1.3 00:26:20.830 Maximum Queue Entries: 1024 00:26:20.830 Contiguous Queues Required: No 00:26:20.830 Arbitration Mechanisms Supported 00:26:20.830 Weighted Round Robin: Not Supported 00:26:20.830 Vendor Specific: Not Supported 00:26:20.830 Reset Timeout: 7500 ms 00:26:20.830 Doorbell Stride: 4 bytes 00:26:20.830 NVM Subsystem Reset: Not Supported 00:26:20.830 Command Sets Supported 00:26:20.830 NVM Command Set: Supported 00:26:20.830 Boot Partition: Not Supported 00:26:20.830 
Memory Page Size Minimum: 4096 bytes 00:26:20.830 Memory Page Size Maximum: 4096 bytes 00:26:20.830 Persistent Memory Region: Not Supported 00:26:20.830 Optional Asynchronous Events Supported 00:26:20.830 Namespace Attribute Notices: Supported 00:26:20.830 Firmware Activation Notices: Not Supported 00:26:20.830 ANA Change Notices: Supported 00:26:20.830 PLE Aggregate Log Change Notices: Not Supported 00:26:20.830 LBA Status Info Alert Notices: Not Supported 00:26:20.830 EGE Aggregate Log Change Notices: Not Supported 00:26:20.830 Normal NVM Subsystem Shutdown event: Not Supported 00:26:20.830 Zone Descriptor Change Notices: Not Supported 00:26:20.830 Discovery Log Change Notices: Not Supported 00:26:20.830 Controller Attributes 00:26:20.830 128-bit Host Identifier: Supported 00:26:20.830 Non-Operational Permissive Mode: Not Supported 00:26:20.830 NVM Sets: Not Supported 00:26:20.830 Read Recovery Levels: Not Supported 00:26:20.830 Endurance Groups: Not Supported 00:26:20.830 Predictable Latency Mode: Not Supported 00:26:20.830 Traffic Based Keep ALive: Supported 00:26:20.830 Namespace Granularity: Not Supported 00:26:20.830 SQ Associations: Not Supported 00:26:20.830 UUID List: Not Supported 00:26:20.830 Multi-Domain Subsystem: Not Supported 00:26:20.830 Fixed Capacity Management: Not Supported 00:26:20.830 Variable Capacity Management: Not Supported 00:26:20.830 Delete Endurance Group: Not Supported 00:26:20.830 Delete NVM Set: Not Supported 00:26:20.830 Extended LBA Formats Supported: Not Supported 00:26:20.830 Flexible Data Placement Supported: Not Supported 00:26:20.830 00:26:20.830 Controller Memory Buffer Support 00:26:20.830 ================================ 00:26:20.847 Supported: No 00:26:20.847 00:26:20.847 Persistent Memory Region Support 00:26:20.847 ================================ 00:26:20.847 Supported: No 00:26:20.847 00:26:20.847 Admin Command Set Attributes 00:26:20.847 ============================ 00:26:20.847 Security Send/Receive: Not Supported 00:26:20.847 Format NVM: Not Supported 00:26:20.847 Firmware Activate/Download: Not Supported 00:26:20.847 Namespace Management: Not Supported 00:26:20.847 Device Self-Test: Not Supported 00:26:20.847 Directives: Not Supported 00:26:20.847 NVMe-MI: Not Supported 00:26:20.847 Virtualization Management: Not Supported 00:26:20.847 Doorbell Buffer Config: Not Supported 00:26:20.847 Get LBA Status Capability: Not Supported 00:26:20.847 Command & Feature Lockdown Capability: Not Supported 00:26:20.847 Abort Command Limit: 4 00:26:20.847 Async Event Request Limit: 4 00:26:20.847 Number of Firmware Slots: N/A 00:26:20.847 Firmware Slot 1 Read-Only: N/A 00:26:20.847 Firmware Activation Without Reset: N/A 00:26:20.847 Multiple Update Detection Support: N/A 00:26:20.847 Firmware Update Granularity: No Information Provided 00:26:20.847 Per-Namespace SMART Log: Yes 00:26:20.847 Asymmetric Namespace Access Log Page: Supported 00:26:20.847 ANA Transition Time : 10 sec 00:26:20.847 00:26:20.847 Asymmetric Namespace Access Capabilities 00:26:20.847 ANA Optimized State : Supported 00:26:20.847 ANA Non-Optimized State : Supported 00:26:20.847 ANA Inaccessible State : Supported 00:26:20.847 ANA Persistent Loss State : Supported 00:26:20.847 ANA Change State : Supported 00:26:20.847 ANAGRPID is not changed : No 00:26:20.847 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:20.847 00:26:20.847 ANA Group Identifier Maximum : 128 00:26:20.847 Number of ANA Group Identifiers : 128 00:26:20.847 Max Number of Allowed Namespaces : 1024 00:26:20.847 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:20.847 Command Effects Log Page: Supported 00:26:20.847 Get Log Page Extended Data: Supported 00:26:20.847 Telemetry Log Pages: Not Supported 00:26:20.847 Persistent Event Log Pages: Not Supported 00:26:20.847 Supported Log Pages Log Page: May Support 00:26:20.847 Commands Supported & Effects Log Page: Not Supported 00:26:20.847 Feature Identifiers & Effects Log Page:May Support 00:26:20.847 NVMe-MI Commands & Effects Log Page: May Support 00:26:20.847 Data Area 4 for Telemetry Log: Not Supported 00:26:20.847 Error Log Page Entries Supported: 128 00:26:20.847 Keep Alive: Supported 00:26:20.847 Keep Alive Granularity: 1000 ms 00:26:20.847 00:26:20.847 NVM Command Set Attributes 00:26:20.847 ========================== 00:26:20.847 Submission Queue Entry Size 00:26:20.847 Max: 64 00:26:20.847 Min: 64 00:26:20.847 Completion Queue Entry Size 00:26:20.847 Max: 16 00:26:20.847 Min: 16 00:26:20.847 Number of Namespaces: 1024 00:26:20.847 Compare Command: Not Supported 00:26:20.847 Write Uncorrectable Command: Not Supported 00:26:20.847 Dataset Management Command: Supported 00:26:20.847 Write Zeroes Command: Supported 00:26:20.847 Set Features Save Field: Not Supported 00:26:20.847 Reservations: Not Supported 00:26:20.847 Timestamp: Not Supported 00:26:20.847 Copy: Not Supported 00:26:20.847 Volatile Write Cache: Present 00:26:20.847 Atomic Write Unit (Normal): 1 00:26:20.847 Atomic Write Unit (PFail): 1 00:26:20.847 Atomic Compare & Write Unit: 1 00:26:20.847 Fused Compare & Write: Not Supported 00:26:20.847 Scatter-Gather List 00:26:20.847 SGL Command Set: Supported 00:26:20.847 SGL Keyed: Not Supported 00:26:20.847 SGL Bit Bucket Descriptor: Not Supported 00:26:20.847 SGL Metadata Pointer: Not Supported 00:26:20.847 Oversized SGL: Not Supported 00:26:20.847 SGL Metadata Address: Not Supported 00:26:20.847 SGL Offset: Supported 00:26:20.847 Transport SGL Data Block: Not Supported 00:26:20.847 Replay Protected Memory Block: Not Supported 00:26:20.847 00:26:20.847 Firmware Slot Information 00:26:20.847 ========================= 00:26:20.847 Active slot: 0 00:26:20.847 00:26:20.847 Asymmetric Namespace Access 00:26:20.847 =========================== 00:26:20.847 Change Count : 0 00:26:20.847 Number of ANA Group Descriptors : 1 00:26:20.847 ANA Group Descriptor : 0 00:26:20.847 ANA Group ID : 1 00:26:20.847 Number of NSID Values : 1 00:26:20.847 Change Count : 0 00:26:20.847 ANA State : 1 00:26:20.847 Namespace Identifier : 1 00:26:20.847 00:26:20.847 Commands Supported and Effects 00:26:20.847 ============================== 00:26:20.848 Admin Commands 00:26:20.848 -------------- 00:26:20.848 Get Log Page (02h): Supported 00:26:20.848 Identify (06h): Supported 00:26:20.848 Abort (08h): Supported 00:26:20.848 Set Features (09h): Supported 00:26:20.848 Get Features (0Ah): Supported 00:26:20.848 Asynchronous Event Request (0Ch): Supported 00:26:20.848 Keep Alive (18h): Supported 00:26:20.848 I/O Commands 00:26:20.848 ------------ 00:26:20.848 Flush (00h): Supported 00:26:20.848 Write (01h): Supported LBA-Change 00:26:20.848 Read (02h): Supported 00:26:20.848 Write Zeroes (08h): Supported LBA-Change 00:26:20.848 Dataset Management (09h): Supported 00:26:20.848 00:26:20.848 Error Log 00:26:20.848 ========= 00:26:20.848 Entry: 0 00:26:20.848 Error Count: 0x3 00:26:20.848 Submission Queue Id: 0x0 00:26:20.848 Command Id: 0x5 00:26:20.848 Phase Bit: 0 00:26:20.848 Status Code: 0x2 00:26:20.848 Status Code Type: 0x0 00:26:20.848 Do Not Retry: 1 00:26:20.848 
Error Location: 0x28 00:26:20.848 LBA: 0x0 00:26:20.848 Namespace: 0x0 00:26:20.848 Vendor Log Page: 0x0 00:26:20.848 ----------- 00:26:20.848 Entry: 1 00:26:20.848 Error Count: 0x2 00:26:20.848 Submission Queue Id: 0x0 00:26:20.848 Command Id: 0x5 00:26:20.848 Phase Bit: 0 00:26:20.848 Status Code: 0x2 00:26:20.848 Status Code Type: 0x0 00:26:20.848 Do Not Retry: 1 00:26:20.848 Error Location: 0x28 00:26:20.848 LBA: 0x0 00:26:20.848 Namespace: 0x0 00:26:20.848 Vendor Log Page: 0x0 00:26:20.848 ----------- 00:26:20.848 Entry: 2 00:26:20.848 Error Count: 0x1 00:26:20.848 Submission Queue Id: 0x0 00:26:20.848 Command Id: 0x4 00:26:20.848 Phase Bit: 0 00:26:20.848 Status Code: 0x2 00:26:20.848 Status Code Type: 0x0 00:26:20.848 Do Not Retry: 1 00:26:20.848 Error Location: 0x28 00:26:20.848 LBA: 0x0 00:26:20.848 Namespace: 0x0 00:26:20.848 Vendor Log Page: 0x0 00:26:20.848 00:26:20.848 Number of Queues 00:26:20.848 ================ 00:26:20.848 Number of I/O Submission Queues: 128 00:26:20.848 Number of I/O Completion Queues: 128 00:26:20.848 00:26:20.848 ZNS Specific Controller Data 00:26:20.848 ============================ 00:26:20.848 Zone Append Size Limit: 0 00:26:20.848 00:26:20.848 00:26:20.848 Active Namespaces 00:26:20.848 ================= 00:26:20.848 get_feature(0x05) failed 00:26:20.848 Namespace ID:1 00:26:20.848 Command Set Identifier: NVM (00h) 00:26:20.848 Deallocate: Supported 00:26:20.848 Deallocated/Unwritten Error: Not Supported 00:26:20.848 Deallocated Read Value: Unknown 00:26:20.848 Deallocate in Write Zeroes: Not Supported 00:26:20.848 Deallocated Guard Field: 0xFFFF 00:26:20.848 Flush: Supported 00:26:20.848 Reservation: Not Supported 00:26:20.848 Namespace Sharing Capabilities: Multiple Controllers 00:26:20.848 Size (in LBAs): 1953525168 (931GiB) 00:26:20.848 Capacity (in LBAs): 1953525168 (931GiB) 00:26:20.848 Utilization (in LBAs): 1953525168 (931GiB) 00:26:20.848 UUID: 29712ae9-9da3-4d38-b5a8-b1bce624141c 00:26:20.848 Thin Provisioning: Not Supported 00:26:20.848 Per-NS Atomic Units: Yes 00:26:20.848 Atomic Boundary Size (Normal): 0 00:26:20.848 Atomic Boundary Size (PFail): 0 00:26:20.848 Atomic Boundary Offset: 0 00:26:20.848 NGUID/EUI64 Never Reused: No 00:26:20.848 ANA group ID: 1 00:26:20.848 Namespace Write Protected: No 00:26:20.848 Number of LBA Formats: 1 00:26:20.848 Current LBA Format: LBA Format #00 00:26:20.848 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:20.848 00:26:20.848 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:20.848 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:20.848 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:20.848 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:20.848 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:20.848 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:20.848 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:20.848 rmmod nvme_tcp 00:26:21.109 rmmod nvme_fabrics 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:21.109 06:30:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.109 06:30:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.016 06:30:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:23.016 06:30:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:23.016 06:30:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:23.016 06:30:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:23.016 06:30:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:23.016 06:30:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:23.016 06:30:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:23.016 06:30:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:23.016 06:30:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:23.016 06:30:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:23.016 06:30:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:24.391 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:24.391 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:24.391 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:24.391 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:24.391 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:24.391 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:26:24.391 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:24.391 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:24.391 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:24.391 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:24.391 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:24.391 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:24.391 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:24.391 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:24.391 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:24.391 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:25.324 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:26:25.583 00:26:25.583 real 0m9.938s 00:26:25.583 user 0m2.270s 00:26:25.583 sys 0m3.576s 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:25.583 ************************************ 00:26:25.583 END TEST nvmf_identify_kernel_target 00:26:25.583 ************************************ 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.583 ************************************ 00:26:25.583 START TEST nvmf_auth_host 00:26:25.583 ************************************ 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:25.583 * Looking for test storage... 
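Before the next test begins, clean_kernel_target (traced above) tears the configfs tree down in the reverse order of its creation. Condensed the same way, with the same caveat that xtrace hides the echo's redirection target:

    # Minimal sketch of the clean_kernel_target teardown; the holders check
    # on /sys/module/nvmet is omitted.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    echo 0 > "$subsys/namespaces/1/enable"          # quiesce the namespace
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"  # drop listener
    rmdir "$subsys/namespaces/1"                    # remove namespace,
    rmdir "$nvmet/ports/1"                          # port,
    rmdir "$subsys"                                 # then the subsystem itself
    modprobe -r nvmet_tcp nvmet                     # transport before core

Removal order matters: the port-to-subsystem link must go before the rmdirs can succeed, and nvmet_tcp has to be unloaded before nvmet, which it depends on.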
00:26:25.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:25.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.583 --rc genhtml_branch_coverage=1 00:26:25.583 --rc genhtml_function_coverage=1 00:26:25.583 --rc genhtml_legend=1 00:26:25.583 --rc geninfo_all_blocks=1 00:26:25.583 --rc geninfo_unexecuted_blocks=1 00:26:25.583 00:26:25.583 ' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:25.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.583 --rc genhtml_branch_coverage=1 00:26:25.583 --rc genhtml_function_coverage=1 00:26:25.583 --rc genhtml_legend=1 00:26:25.583 --rc geninfo_all_blocks=1 00:26:25.583 --rc geninfo_unexecuted_blocks=1 00:26:25.583 00:26:25.583 ' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:25.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.583 --rc genhtml_branch_coverage=1 00:26:25.583 --rc genhtml_function_coverage=1 00:26:25.583 --rc genhtml_legend=1 00:26:25.583 --rc geninfo_all_blocks=1 00:26:25.583 --rc geninfo_unexecuted_blocks=1 00:26:25.583 00:26:25.583 ' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:25.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.583 --rc genhtml_branch_coverage=1 00:26:25.583 --rc genhtml_function_coverage=1 00:26:25.583 --rc genhtml_legend=1 00:26:25.583 --rc geninfo_all_blocks=1 00:26:25.583 --rc geninfo_unexecuted_blocks=1 00:26:25.583 00:26:25.583 ' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.583 06:30:15 
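The cmp_versions walk traced above is how the harness decides whether the installed lcov predates version 2 before opting into the older --rc option spelling for coverage. A minimal standalone sketch of the same component-wise comparison (the function name and wiring here are illustrative, not SPDK's actual scripts/common.sh):

lt() {                        # usage: lt VER1 VER2 -> exit 0 when VER1 < VER2
    local IFS=.-:             # split on dots, dashes, and colons, as in the trace
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing component decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                  # equal versions are not less-than
}
lt 1.15 2 && echo 'lcov 1.15 predates 2: use the --rc lcov_* option spelling'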
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
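The NVME_HOSTNQN assignment above comes from nvme gen-hostnqn, which emits the UUID-based NQN form defined for NVMe over Fabrics; the NVME_HOSTID exported next is just the embedded UUID. A hedged one-line equivalent (using uuidgen as the source is an assumption; the real tool prefers a stable machine UUID when one is available):

printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$(uuidgen)"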
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:25.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:25.583 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:28.110 06:30:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:28.110 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.110 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:28.111 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.111 
06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:28.111 Found net devices under 0000:84:00.0: cvl_0_0 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:28.111 Found net devices under 0000:84:00.1: cvl_0_1 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.111 06:30:17 
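The device-discovery loop above never shells out to ethtool or lspci for the PCI-to-netdev mapping: a PCI function's network interfaces are directory entries under its sysfs node, which is exactly what the pci_net_devs glob reads. A minimal check against this host's first port (reading operstate is an assumption about what the [[ up == up ]] test compares):

pci=0000:84:00.0
ls "/sys/bus/pci/devices/$pci/net/"                     # -> cvl_0_0, the name echoed above
cat "/sys/bus/pci/devices/$pci/net/cvl_0_0/operstate"   # -> up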
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:28.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:26:28.111 00:26:28.111 --- 10.0.0.2 ping statistics --- 00:26:28.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.111 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:28.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:26:28.111 00:26:28.111 --- 10.0.0.1 ping statistics --- 00:26:28.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.111 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.111 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.111 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1160133 00:26:28.111 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:28.111 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1160133 00:26:28.111 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1160133 ']' 00:26:28.111 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.111 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.111 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
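Condensed, the nvmf_tcp_init sequence just traced moves one port of the two-port NIC into a private network namespace so a single machine can act as both target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, root namespace). A self-contained replay of those commands, with interface names and addresses copied from the log (run as root):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # the two pings above verify both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1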
00:26:28.112 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.112 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.469 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.469 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:28.469 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:28.469 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:28.469 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.469 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.469 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:28.469 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:28.469 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6a6e78a9f4f5fd1cc919c40d1683918d 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.HCP 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6a6e78a9f4f5fd1cc919c40d1683918d 0 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6a6e78a9f4f5fd1cc919c40d1683918d 0 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6a6e78a9f4f5fd1cc919c40d1683918d 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.HCP 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.HCP 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.HCP 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:28.470 06:30:18 
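The waitforlisten step above simply blocks until the freshly forked nvmf_tgt (pid 1160133) answers on /var/tmp/spdk.sock. A hedged sketch of that polling idea, with the retry ceiling mirroring max_retries=100 from the trace (rpc_get_methods is a standard SPDK RPC; the loop itself is illustrative, not the harness's exact implementation):

pid=1160133
for (( i = 0; i < 100; i++ )); do
    kill -0 "$pid" 2>/dev/null || { echo 'target died before listening' >&2; exit 1; }
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done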
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3df1da54476d3a3a2db84f142ad76d30ceefe9643b49fdac4eefb1c4ecb62cc5 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ojs 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3df1da54476d3a3a2db84f142ad76d30ceefe9643b49fdac4eefb1c4ecb62cc5 3 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3df1da54476d3a3a2db84f142ad76d30ceefe9643b49fdac4eefb1c4ecb62cc5 3 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3df1da54476d3a3a2db84f142ad76d30ceefe9643b49fdac4eefb1c4ecb62cc5 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ojs 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ojs 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ojs 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=66dcd39558d5721fe6b698dbea757bdf5f849317fdbaacdf 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.LRe 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 66dcd39558d5721fe6b698dbea757bdf5f849317fdbaacdf 0 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 66dcd39558d5721fe6b698dbea757bdf5f849317fdbaacdf 0 
00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=66dcd39558d5721fe6b698dbea757bdf5f849317fdbaacdf 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.LRe 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.LRe 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.LRe 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=294ff1cc8f44ed96e18a6efe725bbf1c209775c2026f9ef6 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.69B 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 294ff1cc8f44ed96e18a6efe725bbf1c209775c2026f9ef6 2 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 294ff1cc8f44ed96e18a6efe725bbf1c209775c2026f9ef6 2 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=294ff1cc8f44ed96e18a6efe725bbf1c209775c2026f9ef6 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.69B 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.69B 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.69B 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.470 06:30:18 
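Every gen_dhchap_key cycle in this stretch follows one pattern: xxd pulls len/2 random bytes from /dev/urandom as a hex string, a small inline python step wraps them into a DHHC-1 secret, and the result lands in a mktemp file restricted to mode 0600. A self-contained sketch of one null-digest 32-character key; the base64-plus-CRC32 encoding inside the python step is my reading of the DHHC-1 secret representation and should be treated as an assumption, since the trace hides the script body:

key=$(xxd -p -c0 -l 16 /dev/urandom)        # 16 random bytes -> 32 hex chars
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'EOF'
import base64, binascii, sys
raw = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(raw).to_bytes(4, "little")   # assumption: little-endian CRC32 trailer
print(f"DHHC-1:00:{base64.b64encode(raw + crc).decode()}:")  # 00 = null digest, per the digests map above
EOF
chmod 0600 "$file"
echo "$file"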
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:28.470 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4d4fbf8125af5d1ee6bf7829085c9f47 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Dwa 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4d4fbf8125af5d1ee6bf7829085c9f47 1 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4d4fbf8125af5d1ee6bf7829085c9f47 1 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4d4fbf8125af5d1ee6bf7829085c9f47 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Dwa 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Dwa 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Dwa 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5d4eaa7c8705729ec42147531e6efe9f 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.CX7 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5d4eaa7c8705729ec42147531e6efe9f 1 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5d4eaa7c8705729ec42147531e6efe9f 1 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=5d4eaa7c8705729ec42147531e6efe9f 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:28.471 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.CX7 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.CX7 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.CX7 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9e05ea72fb644b78886b558f0cd36ca8f3928266284d5d90 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5E3 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9e05ea72fb644b78886b558f0cd36ca8f3928266284d5d90 2 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9e05ea72fb644b78886b558f0cd36ca8f3928266284d5d90 2 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9e05ea72fb644b78886b558f0cd36ca8f3928266284d5d90 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5E3 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5E3 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.5E3 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:28.776 06:30:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d25fa7490b79a56afa69aaa1d9198dfd 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.shR 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d25fa7490b79a56afa69aaa1d9198dfd 0 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d25fa7490b79a56afa69aaa1d9198dfd 0 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d25fa7490b79a56afa69aaa1d9198dfd 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.shR 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.shR 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.shR 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b9878c0b5d648535894eaddffa1d476eea8982c66d6c14555ecd0104ed697104 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.IQP 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b9878c0b5d648535894eaddffa1d476eea8982c66d6c14555ecd0104ed697104 3 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b9878c0b5d648535894eaddffa1d476eea8982c66d6c14555ecd0104ed697104 3 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b9878c0b5d648535894eaddffa1d476eea8982c66d6c14555ecd0104ed697104 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.IQP 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.IQP 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.IQP 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1160133 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1160133 ']' 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.776 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.035 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.035 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:29.035 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.035 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.HCP 00:26:29.035 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.035 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ojs ]] 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ojs 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.LRe 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.69B ]] 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.69B 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Dwa 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.CX7 ]] 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.CX7 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.035 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.5E3 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.shR ]] 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.shR 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.IQP 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.036 06:30:19 
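With all five key/ckey pairs generated, the loop above hands each file to the running target via rpc_cmd. Stripped of the harness wrapper (which points rpc.py at the right socket and namespace), the first pair amounts to:

./scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.HCP
./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ojs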
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]]
00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:26:29.036 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:26:30.413 Waiting for block devices as requested
00:26:30.413 0000:82:00.0 (8086 0a54): vfio-pci -> nvme
00:26:30.413 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:26:30.413 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:26:30.413 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:26:30.670 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:26:30.670 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:26:30.670 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:26:30.670 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:26:30.927 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:26:30.927 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:26:30.927 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:26:30.927 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:26:31.184 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:26:31.184 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:26:31.184 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:26:31.184 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:26:31.442 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:26:31.699 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:26:31.699 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:26:31.699 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:26:31.699 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:26:31.699 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:26:31.699 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:26:31.699 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:26:31.699 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:26:31.699 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:26:31.958 No valid GPT data, bailing
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420
00:26:31.958
00:26:31.958 Discovery Log Number of Records 2, Generation counter 2
00:26:31.958 =====Discovery Log Entry 0======
00:26:31.958 trtype: tcp
00:26:31.958 adrfam: ipv4
00:26:31.958 subtype: current discovery subsystem
00:26:31.958 treq: not specified, sq flow control disable supported
00:26:31.958 portid: 1
00:26:31.958 trsvcid: 4420
00:26:31.958 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:26:31.958 traddr: 10.0.0.1
00:26:31.958 eflags: none
00:26:31.958 sectype: none
00:26:31.958 =====Discovery Log Entry 1======
00:26:31.958 trtype: tcp
00:26:31.958 adrfam: ipv4
00:26:31.958 subtype: nvme subsystem
00:26:31.958 treq: not specified, sq flow control disable supported
00:26:31.958 portid: 1
00:26:31.958 trsvcid: 4420
00:26:31.958 subnqn: nqn.2024-02.io.spdk:cnode0
00:26:31.958 traddr: 10.0.0.1
00:26:31.958 eflags: none
00:26:31.958 sectype: none
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==:
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==:
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.958 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.216 nvme0n1 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.216 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
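The trace above is the kernel-target half of the test: configure_kernel_target builds an NVMe-oF TCP target out of /dev/nvme0n1 through nvmet's configfs tree, host/auth.sh then restricts it to nqn.2024-02.io.spdk:host0 and provisions that host's DH-CHAP secrets, and the first authenticated attach/detach cycle confirms the path works. Stripped of the xtrace noise, the sequence reduces to the sketch below. It is a reconstruction, not the script itself: the nvmet attribute file names (attr_model, attr_allow_any_host, dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the usual kernel configfs names inferred from the echoed values, and the long keys are abbreviated.

    # Sketch of the configfs sequence traced above (attribute names assumed).
    modprobe nvmet
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1" "$host"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

    # Allow only host0 and give it DH-CHAP secrets (first iteration:
    # hmac(sha256) digest, ffdhe2048 DH group, keyid=1).
    echo 0 > "$subsys/attr_allow_any_host"
    ln -s "$host" "$subsys/allowed_hosts/"
    echo 'hmac(sha256)'      > "$host/dhchap_hash"
    echo ffdhe2048           > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:NjZk...' > "$host/dhchap_key"       # host secret (abbreviated)
    echo 'DHHC-1:02:Mjk0...' > "$host/dhchap_ctrl_key"  # controller secret (abbreviated)

With that in place, the `nvme discover -t tcp -a 10.0.0.1 -s 4420` shown earlier returns the two discovery-log entries, and the attach only succeeds when the initiator presents matching secrets.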
00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.217 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.475 nvme0n1 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.475 06:30:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.475 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.476 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.734 nvme0n1 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.734 nvme0n1 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.734 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.993 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.993 nvme0n1 00:26:32.993 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.993 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.993 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.993 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.993 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.993 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.993 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.993 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.993 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.993 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 
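All of the secrets echoed in this run use the DH-HMAC-CHAP secret representation from the NVMe base specification: DHHC-1:<t>:<base64>:, where <t> records how the raw secret was transformed (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and, as defined there, the base64 payload carries the 32/48/64-byte secret followed by a CRC-32 check value. The keyid=4 entry whose key is printed just above has no controller key (the ckey printed next is empty), so that iteration authenticates in one direction only. Keys in this format can be produced with nvme-cli; the exact flags below assume a current nvme-cli and are not part of this trace:

    # Hypothetical key generation, not taken from this run:
    nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn=nqn.2024-02.io.spdk:host0  # -> DHHC-1:01:...
    nvme gen-dhchap-key --hmac=0 --key-length=32                                  # -> DHHC-1:00:...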
00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.251 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.252 nvme0n1 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.252 06:30:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:33.252 
06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.252 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.509 nvme0n1 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.510 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.768 06:30:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.768 nvme0n1 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.768 06:30:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.768 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.026 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.026 nvme0n1 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.026 06:30:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.026 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.284 nvme0n1 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
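On the initiator side, each connect_authenticate iteration is the same three-step RPC pattern: pin the bdev/nvme module to a single digest and DH group, attach with the keyring entry for that keyid, then check bdev_nvme_get_controllers for nvme0 and detach. Outside autotest's rpc_cmd wrapper, the iteration beginning here (sha256 / ffdhe3072 / keyid=4, which has no controller key) corresponds to the sketch below; the keyring_file_add_key step and its key-file path are assumptions, since the trace only shows key names that were registered earlier:

    # Standalone equivalent of one loop iteration (key registration assumed).
    rpc.py keyring_file_add_key key4 /tmp/key4.txt   # hypothetical key file
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
    rpc.py bdev_nvme_get_controllers    # expect: "nvme0"
    rpc.py bdev_nvme_detach_controller nvme0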
00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.284 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.543 nvme0n1 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.543 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.110 nvme0n1 00:26:35.110 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.110 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.110 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.110 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.110 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.110 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.110 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.110 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.110 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.110 06:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.110 06:30:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.110 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.368 nvme0n1 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
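Each nvmet_auth_set_key call above (auth.sh@48 through @51) echoes the digest, DH group, and DHHC-1 secrets into the kernel target's configuration for the host NQN. A hedged sketch of where those four echo lines plausibly land, assuming the standard Linux nvmet configfs layout (the path and attribute names are assumptions, not shown in the trace; the secrets are the ones traced for keyid 2):

  # hypothetical expansion of: nvmet_auth_set_key sha256 ffdhe4096 2
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs path
  echo 'hmac(sha256)' > "$host/dhchap_hash"        # digest  (auth.sh@48)
  echo ffdhe4096 > "$host/dhchap_dhgroup"          # DH group (auth.sh@49)
  echo 'DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G:' > "$host/dhchap_key"       # host key (auth.sh@50)
  echo 'DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw:' > "$host/dhchap_ctrl_key"  # ctrl key (auth.sh@51)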
00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.368 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.933 nvme0n1 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
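The get_main_ns_ip block traced repeatedly above only maps the transport to the right environment variable and prints its value. Condensed into a function, with the empty-value guards omitted (names are as they appear in the traced nvmf/common.sh; the indirection step is inferred from the [[ -z NVMF_INITIATOR_IP ]] / echo 10.0.0.1 pair):

  # condensed sketch of the traced get_main_ns_ip logic
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      ip=${ip_candidates[$TEST_TRANSPORT]}  # picks NVMF_INITIATOR_IP for tcp
      echo "${!ip}"                         # dereferences it: 10.0.0.1 in this run
  }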
00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:35.934 06:30:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.934 06:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.192 nvme0n1 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.193 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.450 nvme0n1 00:26:36.450 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.450 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.450 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.450 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.450 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.450 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:26:36.708 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.709 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:36.709 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.709 06:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.274 nvme0n1 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.274 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.275 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.275 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.275 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.275 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.275 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.275 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.275 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.275 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.275 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.275 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.275 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.275 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.840 nvme0n1 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.840 06:30:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.840 06:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.420 nvme0n1 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:38.420 
06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:26:38.420 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.421 06:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.985 nvme0n1 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.985 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.550 nvme0n1 00:26:39.550 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.550 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.550 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.550 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.550 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.550 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:39.808 06:30:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.808 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.809 06:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.741 nvme0n1 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:40.741 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.742 06:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.673 nvme0n1 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.673 06:30:31 
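Condensed, one connect_authenticate iteration (host/auth.sh@55-61) as traced above amounts to the two RPCs below; rpc_cmd is the autotest harness wrapper for SPDK's RPC interface, and key1/ckey1 are key names registered earlier in the test, outside this excerpt:

digest=sha256 dhgroup=ffdhe8192 keyid=1

# Restrict the host to the digest/dhgroup pair under test.
rpc_cmd bdev_nvme_set_options \
    --dhchap-digests "$digest" \
    --dhchap-dhgroups "$dhgroup"

# Attach over TCP with DH-HMAC-CHAP; the controller-key flag is dropped
# when ckeys[keyid] is empty, as seen for keyid=4 above.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"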
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:41.673 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.674 06:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.606 nvme0n1 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.606 06:30:32 
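The secrets swept here follow the NVMe-oF DH-HMAC-CHAP key format DHHC-1:<hmac-id>:<base64 secret plus CRC>:, where hmac-id 00/01/02/03 selects a null/SHA-256/SHA-384/SHA-512 sized secret, matching the key classes visible above. If nvme-cli is available, a compatible secret can be generated as sketched below (assumed tooling, not part of this run):

# hmac=1 requests a SHA-256 sized secret, i.e. a DHHC-1:01:...: key.
nvme gen-dhchap-key --hmac=1 --nqn nqn.2024-02.io.spdk:cnode0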
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.606 06:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.539 nvme0n1 00:26:43.539 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.539 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.539 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.539 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.539 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.539 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.539 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.539 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.539 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.539 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.797 06:30:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.797 06:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.728 nvme0n1 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.728 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:26:44.729 
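The get_main_ns_ip block repeated throughout (nvmf/common.sh@769-783) reduces to the sketch below: map the transport to the environment variable naming the initiator address and print its value, 10.0.0.1 in this run. TEST_TRANSPORT is an assumed variable name; xtrace shows only its expanded value, tcp:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}  # ip holds the variable *name*
    [[ -z ${!ip} ]] && return 1           # indirect expansion gives 10.0.0.1
    echo "${!ip}"
}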
06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.729 nvme0n1 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.729 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.987 06:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.987 nvme0n1 00:26:44.987 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.987 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.987 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.987 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.987 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.987 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.987 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.987 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.987 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.987 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.244 nvme0n1 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.244 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:45.245 06:30:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.245 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.502 nvme0n1 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
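Every iteration above ends with the same verification and teardown (host/auth.sh@64-65): the attach must have produced exactly one controller named nvme0 (xtrace prints the right-hand side glob-escaped as \n\v\m\e\0), which is then detached before the next digest/dhgroup/keyid combination:

ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $ctrlr == "nvme0" ]]  # a non-zero status here fails the test run
rpc_cmd bdev_nvme_detach_controller nvme0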
00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.502 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.503 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.503 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.503 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.503 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.503 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.503 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.503 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.503 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.503 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.503 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:45.503 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.759 nvme0n1 00:26:45.759 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.759 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.759 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:45.760 06:30:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.760 06:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.017 nvme0n1 00:26:46.017 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.018 06:30:36 
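For orientation, the sweep driving all of the above has the shape below (host/auth.sh@100-104). This excerpt confirms sha256 and sha384 with ffdhe2048/3072/6144/8192 and keyids 0-4; the exact array contents elsewhere in the script are an assumption:

for digest in "${digests[@]}"; do          # sha256, sha384, ...
    for dhgroup in "${dhgroups[@]}"; do    # ..., ffdhe6144, ffdhe8192
        for keyid in "${!keys[@]}"; do     # 0 1 2 3 4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done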
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.018 06:30:36 
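The recurring auth.sh@101-@104 markers are the driver loop: each DH group in dhgroups is swept across every key id, first programming the target, then authenticating from the host. The loop shape is exactly what the markers trace; the surrounding script is not shown in the log:

    # Shape of the sweep traced at host/auth.sh@101-104 (the sha384 pass shown here).
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do         # key ids 0..4 in this run
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # auth.sh@103
            connect_authenticate sha384 "$dhgroup" "$keyid"  # auth.sh@104
        done
    done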
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.018 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.276 nvme0n1 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.276 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.535 nvme0n1 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.535 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.794 nvme0n1 00:26:46.794 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.794 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.794 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.794 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.794 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.794 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.794 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.794 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.794 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:47.052 
06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.052 06:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.052 nvme0n1 00:26:47.052 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.052 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.052 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.052 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.052 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.052 
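Worth noting in the key id 4 iteration just traced: ckey is empty ([[ -z '' ]] at auth.sh@51) and the attach at auth.sh@61 carries only --dhchap-key key4, with no controller key. That is the auth.sh@58 expansion doing its job: bash's ${var:+...} alternate value inside an array yields a two-word option pair when a controller key exists, and expands to nothing at all when it does not:

    # auth.sh@58 exactly as it appears in the trace: an optional argument pair.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # keyid 0-3: ckey == (--dhchap-ctrlr-key ckeyN) -> bidirectional authentication
    # keyid 4:   ckey == ()                         -> host-only authentication
    # attach_args is a placeholder for the transport/address arguments shown above
    rpc_cmd bdev_nvme_attach_controller "${attach_args[@]}" --dhchap-key "key${keyid}" "${ckey[@]}"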
06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.310 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.311 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.569 nvme0n1 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:47.569 06:30:37 
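The nvmf/common.sh@769-783 lines trace get_main_ns_ip resolving which address to dial: an associative array maps each transport to the name of the environment variable holding the address, and the value is then read through bash indirection (the trace shows the variable name NVMF_INITIATOR_IP at @776 and its value 10.0.0.1 at @783). A hedged sketch; the transport variable's name is an assumption, since xtrace only shows its expanded value, tcp:

    # Reconstruction of get_main_ns_ip (nvmf/common.sh@769-783) from the trace.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # common.sh@772
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # common.sh@773
        # $TEST_TRANSPORT is an assumed name; it expands to "tcp" in this run
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # common.sh@776: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # common.sh@778: indirect value is 10.0.0.1
        echo "${!ip}"                          # common.sh@783
    }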
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.569 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.135 nvme0n1 00:26:48.135 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.135 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.135 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.135 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.135 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.135 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.135 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.135 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.135 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.135 06:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.135 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.394 nvme0n1 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.394 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.652 nvme0n1 00:26:48.652 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.652 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.652 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.652 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.652 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.652 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.652 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.652 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.652 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.652 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.911 06:30:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.911 06:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.170 nvme0n1 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.170 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.736 nvme0n1 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.736 06:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.302 nvme0n1 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.302 06:30:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.302 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.560 06:30:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.560 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.818 nvme0n1 00:26:50.818 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.818 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.818 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.818 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.818 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.818 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:51.077 06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.077 
06:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.644 nvme0n1 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.644 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.645 06:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.211 nvme0n1 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.211 06:30:42 
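Each (digest, dhgroup, keyid) iteration of the loops at host/auth.sh@100-103 drives the same host-side sequence, all of it visible in the trace: configure the permitted DH-HMAC-CHAP parameters, resolve the target address, attach with the keypair under test, confirm the controller came up, and detach. Condensed into a single pass, using only the RPCs shown above with the surrounding xtrace plumbing dropped:

# one connect_authenticate pass, e.g. sha384 / ffdhe8192 / keyid=1
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
ip=$(get_main_ns_ip)   # resolves to 10.0.0.1 throughout this run
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key1 --dhchap-ctrlr-key ckey1
# authentication succeeded iff the controller is now visible
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0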
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.211 06:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.145 nvme0n1 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.145 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.079 nvme0n1 00:26:54.079 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.079 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.079 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.079 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.079 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.079 06:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.079 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.079 
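get_main_ns_ip, expanded at nvmf/common.sh@769-783 before every attach, never hardcodes the address: it maps the transport to the name of an environment variable and then dereferences that name. The traced pair ip=NVMF_INITIATOR_IP followed by echo 10.0.0.1 is bash indirect expansion at work. A reconstruction consistent with those expansions (the function body is inferred, not quoted from nvmf/common.sh):

# reconstructed sketch; assumes TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	# common.sh@775: bail out if the transport is unset or unmapped
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}   # common.sh@776: ip=NVMF_INITIATOR_IP
	[[ -z ${!ip} ]] && return 1            # common.sh@778: ${!ip} is 10.0.0.1 here
	echo "${!ip}"                          # common.sh@783
}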
06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.080 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.080 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.080 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.080 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.080 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.080 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.080 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.011 nvme0n1 00:26:55.011 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.011 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.011 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.011 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.011 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.011 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.011 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.011 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.012 06:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.944 nvme0n1 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.944 06:30:45 
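The odd-looking [[ nvme0 == \n\v\m\e\0 ]] lines that follow every bdev_nvme_get_controllers are not corruption: when the right-hand side of == inside [[ ]] is quoted, bash's xtrace escapes it character by character so the printed form would re-parse as a literal, non-glob match. A two-line demonstration:

# reproduce the escaping seen at host/auth.sh@64
set -x
name=nvme0
[[ $name == "nvme0" ]] && echo matched
# trace output:
# + [[ nvme0 == \n\v\m\e\0 ]]
# + echo matched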
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.944 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.945 06:30:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.945 06:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.877 nvme0n1 00:26:56.877 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.877 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.877 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.877 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.877 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.877 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.877 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.877 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.877 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.877 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.135 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.135 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:57.135 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.135 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.135 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:57.135 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.135 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.135 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.135 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.135 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:57.135 06:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
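The expansion at host/auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), also explains why the keyid=4 attaches in this trace pass --dhchap-key key4 with no --dhchap-ctrlr-key: key 4 has no controller key (ckey= is empty at auth.sh@46), so the :+ expansion yields an empty array and the flag pair simply vanishes from the rpc_cmd invocation. The idiom in isolation:

# ${var:+word} expands to word only when var is set and non-empty,
# making a flag-plus-value pair disappear when its source is empty;
# the secret below is a placeholder, not a key from this run
declare -a ckeys=([1]="DHHC-1:02:placeholder" [4]="")
for keyid in 1 4; do
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
	echo bdev_nvme_attach_controller --dhchap-key "key${keyid}" "${ckey[@]}"
done
# keyid=1 -> bdev_nvme_attach_controller --dhchap-key key1 --dhchap-ctrlr-key ckey1
# keyid=4 -> bdev_nvme_attach_controller --dhchap-key key4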
common/autotest_common.sh@10 -- # set +x 00:26:57.136 nvme0n1 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.136 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.393 nvme0n1 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:57.393 
06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.393 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.651 nvme0n1 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.651 
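The attach calls in this stretch resolve the target address through get_main_ns_ip (nvmf/common.sh@769-783). The trace makes its trick visible: the associative array stores variable *names*, and ${!ip} indirection turns NVMF_INITIATOR_IP into 10.0.0.1 for the tcp transport. An approximate reconstruction, with error paths guessed where the happy path hides them:

    # Approximate get_main_ns_ip per the nvmf/common.sh@769-783 trace lines.
    get_main_ns_ip() {
        local ip                                     # @769
        local -A ip_candidates=()                    # @770
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # @772
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # @773

        # @775: bail out unless the transport has a registered candidate
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}         # @776: the *name*, not the value
        [[ -z ${!ip} ]] && return 1                  # @778: indirect expansion
        echo "${!ip}"                                # @783: -> 10.0.0.1 in this run
    }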
06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.651 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.910 nvme0n1 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
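The keyid-4 block above pairs a key of the form DHHC-1:03:...: with an empty ckey, so that leg runs unidirectional authentication only. In the DH-HMAC-CHAP secret representation, the two-digit field after DHHC-1 names the transform hash (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which also fixes the secret size, and the base64 payload carries the secret followed by a 4-byte CRC-32. A hypothetical helper (not part of auth.sh) that recovers the secret length under that assumed layout:

    # Hypothetical, for illustration only: secret length inside a DHHC-1
    # string, assuming the payload is base64(secret || CRC-32).
    dhchap_secret_len() {
        local b64=${1#DHHC-1:*:}    # strip the "DHHC-1:NN:" prefix
        b64=${b64%:}                # strip the trailing ':'
        echo $(( $(printf '%s' "$b64" | base64 -d | wc -c) - 4 ))
    }
    # dhchap_secret_len 'DHHC-1:03:Yjk4...NHr/oqE=:' -> 64 (SHA-512-sized secret)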
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.910 06:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.168 nvme0n1 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.168 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.427 nvme0n1 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.427 
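Each keyid then goes through connect_authenticate (host/auth.sh@55-65), which pins the SPDK initiator to exactly one digest/dhgroup pair, attaches with the matching key(s), and treats a named controller showing up as the pass condition. Pieced together from the @55-@65 trace; $hostnqn and $subnqn are shorthand for the literal nqn.2024-02.io.spdk:host0 and nqn.2024-02.io.spdk:cnode0 seen in the log:

    # connect_authenticate as implied by the host/auth.sh@55-65 trace lines.
    connect_authenticate() {
        local digest dhgroup keyid ckey ctrlr                       # @55
        digest=$1 dhgroup=$2 keyid=$3                               # @57
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # @58

        # @60: only advertise the digest/dhgroup under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # @61: the attach fails unless DH-HMAC-CHAP completes
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t "$TEST_TRANSPORT" -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # @64: the \n\v\m\e\0 escaping in the trace only keeps the pattern literal
        ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
        [[ $ctrlr == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0                   # @65
    }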
06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.427 06:30:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.427 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.687 nvme0n1 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:58.687 06:30:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.687 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.688 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.947 nvme0n1 00:26:58.947 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.947 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.947 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.947 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.947 06:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.947 06:30:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.947 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.206 nvme0n1 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.206 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:59.465 
06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
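The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment that recurs at @58 throughout this log is the usual bash idiom for an optional flag: `:+` expands to the alternate words only when the slot is set and non-empty, so keyid 4, whose ckey is empty, contributes no arguments at all. A standalone illustration:

    # The :+ expansion behind the optional --dhchap-ctrlr-key flag.
    ckeys=([0]="some-ctrlr-secret" [4]="")
    for keyid in 0 4; do
        args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#args[@]} extra arg(s): ${args[*]}"
    done
    # keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
    # keyid=4 -> 0 extra arg(s):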
00:26:59.465 nvme0n1 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.465 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:26:59.725 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:59.726 06:30:49 
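At @101-@103 the run rolls over from ffdhe3072 to ffdhe4096: the outer loops sweep every configured DH group against every key index, repeating the same set-key/connect/detach block once per (dhgroup, keyid) pair. For the sha512 leg captured here the driver is essentially the nest below; the dhgroups and keys arrays are populated earlier in the script, and earlier digests were exercised by the same loop before this point in the log.

    # Driver nest implied by host/auth.sh@101-@104 for the sha512 pass.
    for dhgroup in "${dhgroups[@]}"; do          # @101: ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do           # @102: 0..4
            nvmet_auth_set_key "sha512" "$dhgroup" "$keyid"    # @103: target side
            connect_authenticate "sha512" "$dhgroup" "$keyid"  # @104: initiator side
        done
    done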
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.726 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.985 nvme0n1 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.985 06:30:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.985 06:30:49 
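Every rpc_cmd call in this log is bracketed by xtrace_disable (common/autotest_common.sh@563) on entry and the `[[ 0 == 0 ]]` check at @591 on the way out, which keeps the RPC plumbing itself from flooding the trace. The guard pattern is approximated below with assumed names; the real rpc_cmd also keeps a persistent connection to the SPDK RPC socket rather than re-invoking rpc.py each time.

    # Approximation of the guard seen at autotest_common.sh@563/@591.
    # X_STACK is an assumed name; the restore check surfaces as "[[ 0 == 0 ]]".
    xtrace_disable() {
        X_STACK=$(( ${X_STACK:-0} + 1 ))
        set +x
    }
    xtrace_restore() {
        X_STACK=$(( X_STACK - 1 ))
        [[ $X_STACK == 0 ]] && set -x
    }
    rpc_cmd() {
        xtrace_disable
        "$rootdir/scripts/rpc.py" "$@"
        local rc=$?
        xtrace_restore
        return $rc
    }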
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.985 06:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.245 nvme0n1 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.245 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.813 nvme0n1 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.813 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.814 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.814 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.814 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.814 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:00.814 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.814 06:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.072 nvme0n1 00:27:01.072 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.072 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.072 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.072 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.072 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.073 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.664 nvme0n1 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.664 06:30:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.664 06:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.232 nvme0n1 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.232 06:30:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.232 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.796 nvme0n1 00:27:02.796 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.796 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.797 06:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.363 nvme0n1 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.363 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.928 nvme0n1 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:03.928 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.929 06:30:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.929 06:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.495 nvme0n1 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZTc4YTlmNGY1ZmQxY2M5MTljNDBkMTY4MzkxOGTXJcpt: 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: ]] 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2RmMWRhNTQ0NzZkM2EzYTJkYjg0ZjE0MmFkNzZkMzBjZWVmZTk2NDNiNDlmZGFjNGVlZmIxYzRlY2I2MmNjNSXZhq0=: 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:04.495 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.496 06:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.430 nvme0n1 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.430 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.431 06:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.363 nvme0n1 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.363 06:30:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:06.363 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.364 06:30:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.364 06:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.342 nvme0n1 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWUwNWVhNzJmYjY0NGI3ODg4NmI1NThmMGNkMzZjYThmMzkyODI2NjI4NGQ1ZDkwqUD6NA==: 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: ]] 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZmE3NDkwYjc5YTU2YWZhNjlhYWExZDkxOThkZmSWDAVV: 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:07.342 06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.342 
06:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.334 nvme0n1 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk4NzhjMGI1ZDY0ODUzNTg5NGVhZGRmZmExZDQ3NmVlYTg5ODJjNjZkNmMxNDU1NWVjZDAxMDRlZDY5NzEwNHr/oqE=: 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.334 06:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.269 nvme0n1 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.269 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.269 request: 00:27:09.269 { 00:27:09.269 "name": "nvme0", 00:27:09.269 "trtype": "tcp", 00:27:09.269 "traddr": "10.0.0.1", 00:27:09.269 "adrfam": "ipv4", 00:27:09.269 "trsvcid": "4420", 00:27:09.269 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:09.269 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:09.269 "prchk_reftag": false, 00:27:09.527 "prchk_guard": false, 00:27:09.527 "hdgst": false, 00:27:09.527 "ddgst": false, 00:27:09.527 "allow_unrecognized_csi": false, 00:27:09.527 "method": "bdev_nvme_attach_controller", 00:27:09.527 "req_id": 1 00:27:09.527 } 00:27:09.527 Got JSON-RPC error response 00:27:09.527 response: 00:27:09.527 { 00:27:09.527 "code": -5, 00:27:09.527 "message": "Input/output error" 00:27:09.527 } 00:27:09.527 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:09.527 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:09.527 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:09.527 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:09.527 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:09.527 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.527 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.527 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:09.527 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.527 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.527 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:09.527 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:09.527 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
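The host/auth.sh@112 exchange above is a negative test: the kernel nvmet target for this run requires DH-HMAC-CHAP, so a bdev_nvme_attach_controller call that offers no --dhchap-key is expected to come back with JSON-RPC code -5 (Input/output error), exactly as the request/response pair shows. A minimal sketch of that pattern, assuming SPDK's scripts/rpc.py is on hand and the target from this run (10.0.0.1:4420, nqn.2024-02.io.spdk:cnode0) enforces authentication:

    # expected to fail: no DH-CHAP key is offered to an auth-required subsystem
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "attach unexpectedly succeeded" >&2
        exit 1
    fi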
00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.528 request: 00:27:09.528 { 00:27:09.528 "name": "nvme0", 00:27:09.528 "trtype": "tcp", 00:27:09.528 "traddr": "10.0.0.1", 00:27:09.528 "adrfam": "ipv4", 00:27:09.528 "trsvcid": "4420", 00:27:09.528 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:09.528 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:09.528 "prchk_reftag": false, 00:27:09.528 "prchk_guard": false, 00:27:09.528 "hdgst": false, 00:27:09.528 "ddgst": false, 00:27:09.528 "dhchap_key": "key2", 00:27:09.528 "allow_unrecognized_csi": false, 00:27:09.528 "method": "bdev_nvme_attach_controller", 00:27:09.528 "req_id": 1 00:27:09.528 } 00:27:09.528 Got JSON-RPC error response 00:27:09.528 response: 00:27:09.528 { 00:27:09.528 "code": -5, 00:27:09.528 "message": "Input/output error" 00:27:09.528 } 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
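The NOT wrapper that drives these expected-failure checks (traced at common/autotest_common.sh@652-679 above) runs its argument and inverts the exit status; the real helper also validates that the argument is executable and special-cases signal exit codes above 128. A simplified sketch of the behavior the trace relies on, reusing the key2 attach that was just rejected:

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }

    # usage mirroring the trace: passes because key2 is refused by the target
    NOT scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2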
00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.528 request: 00:27:09.528 { 00:27:09.528 "name": "nvme0", 00:27:09.528 "trtype": "tcp", 00:27:09.528 "traddr": "10.0.0.1", 00:27:09.528 "adrfam": "ipv4", 00:27:09.528 "trsvcid": "4420", 00:27:09.528 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:09.528 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:09.528 "prchk_reftag": false, 00:27:09.528 "prchk_guard": false, 00:27:09.528 "hdgst": false, 00:27:09.528 "ddgst": false, 00:27:09.528 "dhchap_key": "key1", 00:27:09.528 "dhchap_ctrlr_key": "ckey2", 00:27:09.528 "allow_unrecognized_csi": false, 00:27:09.528 "method": "bdev_nvme_attach_controller", 00:27:09.528 "req_id": 1 00:27:09.528 } 00:27:09.528 Got JSON-RPC error response 00:27:09.528 response: 00:27:09.528 { 00:27:09.528 "code": -5, 00:27:09.528 "message": "Input/output 
error" 00:27:09.528 } 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.528 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.787 nvme0n1 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:09.787 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.046 request: 00:27:10.046 { 00:27:10.046 "name": "nvme0", 00:27:10.046 "dhchap_key": "key1", 00:27:10.046 "dhchap_ctrlr_key": "ckey2", 00:27:10.046 "method": "bdev_nvme_set_keys", 00:27:10.046 "req_id": 1 00:27:10.046 } 00:27:10.046 Got JSON-RPC error response 00:27:10.046 response: 00:27:10.046 { 00:27:10.046 "code": -13, 00:27:10.046 "message": "Permission denied" 00:27:10.046 } 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:10.046 06:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.046 06:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:10.046 06:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:10.979 06:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.979 06:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:10.979 06:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.979 06:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.979 06:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.979 06:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:10.979 06:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjZkY2QzOTU1OGQ1NzIxZmU2YjY5OGRiZWE3NTdiZGY1Zjg0OTMxN2ZkYmFhY2Rm2P+0XQ==: 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: ]] 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Mjk0ZmYxY2M4ZjQ0ZWQ5NmUxOGE2ZWZlNzI1YmJmMWMyMDk3NzVjMjAyNmY5ZWY2K5nYcg==: 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.348 nvme0n1 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGQ0ZmJmODEyNWFmNWQxZWU2YmY3ODI5MDg1YzlmNDcm7t6G: 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: ]] 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ0ZWFhN2M4NzA1NzI5ZWM0MjE0NzUzMWU2ZWZlOWY13nUw: 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:12.348 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.349 request: 00:27:12.349 { 00:27:12.349 "name": "nvme0", 00:27:12.349 "dhchap_key": "key2", 00:27:12.349 "dhchap_ctrlr_key": "ckey1", 00:27:12.349 "method": "bdev_nvme_set_keys", 00:27:12.349 "req_id": 1 00:27:12.349 } 00:27:12.349 Got JSON-RPC error response 00:27:12.349 response: 00:27:12.349 { 00:27:12.349 "code": -13, 00:27:12.349 "message": "Permission denied" 00:27:12.349 } 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:12.349 06:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:13.281 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.281 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:13.281 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.281 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.281 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:13.539 06:31:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:13.539 rmmod nvme_tcp 00:27:13.539 rmmod nvme_fabrics 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1160133 ']' 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1160133 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1160133 ']' 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1160133 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1160133 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1160133' 00:27:13.539 killing process with pid 1160133 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1160133 00:27:13.539 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1160133 00:27:13.799 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:13.799 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:13.799 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:13.799 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:13.799 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:13.799 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:13.799 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:13.799 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:13.799 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:13.799 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.799 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:13.799 06:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.702 06:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:15.702 06:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:15.702 06:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:15.702 06:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:15.702 06:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:15.702 06:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:15.702 06:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:15.702 06:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:15.702 06:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:15.702 06:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:15.702 06:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:15.702 06:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:15.702 06:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:17.077 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:17.077 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:17.077 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:17.077 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:17.077 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:17.077 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:17.077 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:17.077 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:17.077 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:17.077 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:17.077 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:17.077 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:17.336 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:17.336 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:17.336 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:17.336 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:18.274 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:27:18.274 06:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.HCP /tmp/spdk.key-null.LRe /tmp/spdk.key-sha256.Dwa /tmp/spdk.key-sha384.5E3 /tmp/spdk.key-sha512.IQP /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:18.274 06:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:19.650 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:19.650 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:19.650 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:27:19.650 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:19.650 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:19.650 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:19.650 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:19.650 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:19.650 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:19.650 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:19.650 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:19.650 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:19.650 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:19.650 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:19.650 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:19.650 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:19.650 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:19.650 00:27:19.650 real 0m54.110s 00:27:19.650 user 0m51.436s 00:27:19.650 sys 0m6.413s 00:27:19.650 06:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:19.650 06:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.650 ************************************ 00:27:19.650 END TEST nvmf_auth_host 00:27:19.650 ************************************ 00:27:19.650 06:31:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:19.650 06:31:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:19.650 06:31:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:19.650 06:31:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.650 06:31:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.650 ************************************ 00:27:19.650 START TEST nvmf_digest 00:27:19.650 ************************************ 00:27:19.650 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:19.650 * Looking for test storage... 
00:27:19.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.650 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:19.650 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:27:19.650 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:19.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.910 --rc genhtml_branch_coverage=1 00:27:19.910 --rc genhtml_function_coverage=1 00:27:19.910 --rc genhtml_legend=1 00:27:19.910 --rc geninfo_all_blocks=1 00:27:19.910 --rc geninfo_unexecuted_blocks=1 00:27:19.910 00:27:19.910 ' 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:19.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.910 --rc genhtml_branch_coverage=1 00:27:19.910 --rc genhtml_function_coverage=1 00:27:19.910 --rc genhtml_legend=1 00:27:19.910 --rc geninfo_all_blocks=1 00:27:19.910 --rc geninfo_unexecuted_blocks=1 00:27:19.910 00:27:19.910 ' 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:19.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.910 --rc genhtml_branch_coverage=1 00:27:19.910 --rc genhtml_function_coverage=1 00:27:19.910 --rc genhtml_legend=1 00:27:19.910 --rc geninfo_all_blocks=1 00:27:19.910 --rc geninfo_unexecuted_blocks=1 00:27:19.910 00:27:19.910 ' 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:19.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.910 --rc genhtml_branch_coverage=1 00:27:19.910 --rc genhtml_function_coverage=1 00:27:19.910 --rc genhtml_legend=1 00:27:19.910 --rc geninfo_all_blocks=1 00:27:19.910 --rc geninfo_unexecuted_blocks=1 00:27:19.910 00:27:19.910 ' 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.910 
06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:19.910 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:19.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:19.911 06:31:09 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:19.911 06:31:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.447 
06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:22.447 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:22.447 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.447 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:22.447 Found net devices under 0000:84:00.0: cvl_0_0 
00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:22.448 Found net devices under 0000:84:00.1: cvl_0_1 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:22.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:27:22.448 00:27:22.448 --- 10.0.0.2 ping statistics --- 00:27:22.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.448 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:22.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:27:22.448 00:27:22.448 --- 10.0.0.1 ping statistics --- 00:27:22.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.448 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:22.448 ************************************ 00:27:22.448 START TEST nvmf_digest_clean 00:27:22.448 ************************************ 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1170165 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1170165 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1170165 ']' 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:22.448 [2024-12-08 06:31:12.258258] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:27:22.448 [2024-12-08 06:31:12.258346] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.448 [2024-12-08 06:31:12.329712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.448 [2024-12-08 06:31:12.385669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.448 [2024-12-08 06:31:12.385748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.448 [2024-12-08 06:31:12.385775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.448 [2024-12-08 06:31:12.385787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.448 [2024-12-08 06:31:12.385797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:22.448 [2024-12-08 06:31:12.386416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.448 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:22.707 null0 00:27:22.707 [2024-12-08 06:31:12.623660] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.707 [2024-12-08 06:31:12.647919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1170193 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1170193 /var/tmp/bperf.sock 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1170193 ']' 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:22.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.707 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:22.707 [2024-12-08 06:31:12.696252] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:27:22.707 [2024-12-08 06:31:12.696317] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170193 ] 00:27:22.707 [2024-12-08 06:31:12.765786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.707 [2024-12-08 06:31:12.826095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.965 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.965 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:22.965 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:22.965 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:22.965 06:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:23.531 06:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:23.531 06:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:23.789 nvme0n1 00:27:23.789 06:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:23.789 06:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:24.047 Running I/O for 2 seconds... 
00:27:25.916 19473.00 IOPS, 76.07 MiB/s [2024-12-08T05:31:16.035Z] 19939.00 IOPS, 77.89 MiB/s 00:27:25.916 Latency(us) 00:27:25.916 [2024-12-08T05:31:16.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.916 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:25.916 nvme0n1 : 2.01 19957.65 77.96 0.00 0.00 6404.45 3082.62 16019.91 00:27:25.916 [2024-12-08T05:31:16.035Z] =================================================================================================================== 00:27:25.916 [2024-12-08T05:31:16.035Z] Total : 19957.65 77.96 0.00 0.00 6404.45 3082.62 16019.91 00:27:25.916 { 00:27:25.916 "results": [ 00:27:25.916 { 00:27:25.916 "job": "nvme0n1", 00:27:25.916 "core_mask": "0x2", 00:27:25.916 "workload": "randread", 00:27:25.916 "status": "finished", 00:27:25.916 "queue_depth": 128, 00:27:25.916 "io_size": 4096, 00:27:25.916 "runtime": 2.005998, 00:27:25.916 "iops": 19957.647016597224, 00:27:25.916 "mibps": 77.9595586585829, 00:27:25.916 "io_failed": 0, 00:27:25.916 "io_timeout": 0, 00:27:25.916 "avg_latency_us": 6404.4452671320005, 00:27:25.916 "min_latency_us": 3082.6192592592593, 00:27:25.916 "max_latency_us": 16019.91111111111 00:27:25.916 } 00:27:25.916 ], 00:27:25.916 "core_count": 1 00:27:25.916 } 00:27:25.916 06:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:25.916 06:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:25.916 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:25.916 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:25.916 | select(.opcode=="crc32c") 00:27:25.916 | "\(.module_name) \(.executed)"' 00:27:25.916 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:26.175 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:26.175 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:26.175 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:26.175 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:26.175 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1170193 00:27:26.175 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1170193 ']' 00:27:26.175 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1170193 00:27:26.175 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:26.433 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.433 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1170193 00:27:26.433 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:26.433 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:26.433 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1170193' 00:27:26.433 killing process with pid 1170193 00:27:26.433 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1170193 00:27:26.433 Received shutdown signal, test time was about 2.000000 seconds 00:27:26.433 00:27:26.433 Latency(us) 00:27:26.433 [2024-12-08T05:31:16.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.433 [2024-12-08T05:31:16.552Z] =================================================================================================================== 00:27:26.433 [2024-12-08T05:31:16.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:26.434 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1170193 00:27:26.691 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:26.691 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:26.691 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:26.691 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:26.691 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:26.691 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:26.691 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:26.691 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1170715 00:27:26.692 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:26.692 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1170715 /var/tmp/bperf.sock 00:27:26.692 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1170715 ']' 00:27:26.692 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:26.692 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.692 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:26.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:26.692 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.692 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:26.692 [2024-12-08 06:31:16.607878] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:27:26.692 [2024-12-08 06:31:16.607954] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170715 ] 00:27:26.692 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:26.692 Zero copy mechanism will not be used. 00:27:26.692 [2024-12-08 06:31:16.674506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.692 [2024-12-08 06:31:16.730093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.949 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.949 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:26.949 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:26.949 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:26.949 06:31:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:27.206 06:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.206 06:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.771 nvme0n1 00:27:27.771 06:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:27.771 06:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:27.771 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:27.771 Zero copy mechanism will not be used. 00:27:27.771 Running I/O for 2 seconds... 
00:27:30.078 4825.00 IOPS, 603.12 MiB/s [2024-12-08T05:31:20.197Z] 4849.50 IOPS, 606.19 MiB/s 00:27:30.078 Latency(us) 00:27:30.078 [2024-12-08T05:31:20.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.078 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:30.078 nvme0n1 : 2.00 4852.57 606.57 0.00 0.00 3292.88 825.27 6796.33 00:27:30.078 [2024-12-08T05:31:20.197Z] =================================================================================================================== 00:27:30.078 [2024-12-08T05:31:20.197Z] Total : 4852.57 606.57 0.00 0.00 3292.88 825.27 6796.33 00:27:30.078 { 00:27:30.078 "results": [ 00:27:30.078 { 00:27:30.078 "job": "nvme0n1", 00:27:30.078 "core_mask": "0x2", 00:27:30.078 "workload": "randread", 00:27:30.078 "status": "finished", 00:27:30.078 "queue_depth": 16, 00:27:30.078 "io_size": 131072, 00:27:30.078 "runtime": 2.004915, 00:27:30.078 "iops": 4852.574797435303, 00:27:30.078 "mibps": 606.5718496794128, 00:27:30.078 "io_failed": 0, 00:27:30.078 "io_timeout": 0, 00:27:30.078 "avg_latency_us": 3292.8753845509605, 00:27:30.078 "min_latency_us": 825.2681481481482, 00:27:30.078 "max_latency_us": 6796.325925925926 00:27:30.078 } 00:27:30.078 ], 00:27:30.078 "core_count": 1 00:27:30.079 } 00:27:30.079 06:31:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:30.079 06:31:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:30.079 06:31:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:30.079 06:31:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:30.079 | select(.opcode=="crc32c") 00:27:30.079 | "\(.module_name) \(.executed)"' 00:27:30.079 06:31:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1170715 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1170715 ']' 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1170715 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1170715 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1170715' 00:27:30.079 killing process with pid 1170715 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1170715 00:27:30.079 Received shutdown signal, test time was about 2.000000 seconds 00:27:30.079 00:27:30.079 Latency(us) 00:27:30.079 [2024-12-08T05:31:20.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.079 [2024-12-08T05:31:20.198Z] =================================================================================================================== 00:27:30.079 [2024-12-08T05:31:20.198Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:30.079 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1170715 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1171127 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1171127 /var/tmp/bperf.sock 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1171127 ']' 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:30.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.338 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:30.596 [2024-12-08 06:31:20.471198] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:27:30.596 [2024-12-08 06:31:20.471287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1171127 ] 00:27:30.596 [2024-12-08 06:31:20.536617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.596 [2024-12-08 06:31:20.590863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.596 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.596 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:30.596 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:30.596 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:30.596 06:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:31.163 06:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:31.163 06:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:31.421 nvme0n1 00:27:31.421 06:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:31.421 06:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:31.679 Running I/O for 2 seconds... 
00:27:33.547 23216.00 IOPS, 90.69 MiB/s [2024-12-08T05:31:23.666Z] 23322.00 IOPS, 91.10 MiB/s 00:27:33.547 Latency(us) 00:27:33.547 [2024-12-08T05:31:23.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.547 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:33.547 nvme0n1 : 2.00 23328.79 91.13 0.00 0.00 5478.15 2597.17 10145.94 00:27:33.547 [2024-12-08T05:31:23.666Z] =================================================================================================================== 00:27:33.547 [2024-12-08T05:31:23.666Z] Total : 23328.79 91.13 0.00 0.00 5478.15 2597.17 10145.94 00:27:33.547 { 00:27:33.547 "results": [ 00:27:33.547 { 00:27:33.547 "job": "nvme0n1", 00:27:33.547 "core_mask": "0x2", 00:27:33.547 "workload": "randwrite", 00:27:33.547 "status": "finished", 00:27:33.547 "queue_depth": 128, 00:27:33.547 "io_size": 4096, 00:27:33.547 "runtime": 2.004905, 00:27:33.547 "iops": 23328.78615196231, 00:27:33.547 "mibps": 91.12807090610278, 00:27:33.547 "io_failed": 0, 00:27:33.547 "io_timeout": 0, 00:27:33.547 "avg_latency_us": 5478.14749565267, 00:27:33.547 "min_latency_us": 2597.1674074074076, 00:27:33.547 "max_latency_us": 10145.943703703704 00:27:33.547 } 00:27:33.547 ], 00:27:33.548 "core_count": 1 00:27:33.548 } 00:27:33.548 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:33.548 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:33.548 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:33.548 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:33.548 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:33.548 | select(.opcode=="crc32c") 00:27:33.548 | "\(.module_name) \(.executed)"' 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1171127 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1171127 ']' 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1171127 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1171127 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1171127' 00:27:33.806 killing process with pid 1171127 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1171127 00:27:33.806 Received shutdown signal, test time was about 2.000000 seconds 00:27:33.806 00:27:33.806 Latency(us) 00:27:33.806 [2024-12-08T05:31:23.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.806 [2024-12-08T05:31:23.925Z] =================================================================================================================== 00:27:33.806 [2024-12-08T05:31:23.925Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:33.806 06:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1171127 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1171602 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1171602 /var/tmp/bperf.sock 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1171602 ']' 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:34.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.064 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:34.064 [2024-12-08 06:31:24.163144] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:27:34.064 [2024-12-08 06:31:24.163249] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1171602 ] 00:27:34.064 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:34.064 Zero copy mechanism will not be used. 00:27:34.324 [2024-12-08 06:31:24.231777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.324 [2024-12-08 06:31:24.288558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.324 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.324 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:34.324 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:34.324 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:34.324 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:34.891 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.891 06:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:35.154 nvme0n1 00:27:35.154 06:31:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:35.154 06:31:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:35.412 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:35.412 Zero copy mechanism will not be used. 00:27:35.412 Running I/O for 2 seconds... 
00:27:37.333 4708.00 IOPS, 588.50 MiB/s [2024-12-08T05:31:27.452Z] 4824.50 IOPS, 603.06 MiB/s 00:27:37.333 Latency(us) 00:27:37.333 [2024-12-08T05:31:27.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.333 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:37.333 nvme0n1 : 2.00 4825.00 603.12 0.00 0.00 3308.88 1990.35 11845.03 00:27:37.333 [2024-12-08T05:31:27.452Z] =================================================================================================================== 00:27:37.333 [2024-12-08T05:31:27.452Z] Total : 4825.00 603.12 0.00 0.00 3308.88 1990.35 11845.03 00:27:37.333 { 00:27:37.333 "results": [ 00:27:37.333 { 00:27:37.333 "job": "nvme0n1", 00:27:37.334 "core_mask": "0x2", 00:27:37.334 "workload": "randwrite", 00:27:37.334 "status": "finished", 00:27:37.334 "queue_depth": 16, 00:27:37.334 "io_size": 131072, 00:27:37.334 "runtime": 2.003938, 00:27:37.334 "iops": 4824.999575835181, 00:27:37.334 "mibps": 603.1249469793976, 00:27:37.334 "io_failed": 0, 00:27:37.334 "io_timeout": 0, 00:27:37.334 "avg_latency_us": 3308.87727667268, 00:27:37.334 "min_latency_us": 1990.3525925925926, 00:27:37.334 "max_latency_us": 11845.025185185184 00:27:37.334 } 00:27:37.334 ], 00:27:37.334 "core_count": 1 00:27:37.334 } 00:27:37.334 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:37.334 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:37.334 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:37.334 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:37.334 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:37.334 | select(.opcode=="crc32c") 00:27:37.334 | "\(.module_name) \(.executed)"' 00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1171602 00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1171602 ']' 00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1171602 00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1171602 00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']'
00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1171602'
00:27:37.592 killing process with pid 1171602
00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1171602
00:27:37.592 Received shutdown signal, test time was about 2.000000 seconds
00:27:37.592
00:27:37.592                                                                Latency(us)
00:27:37.592 [2024-12-08T05:31:27.711Z] Device Information          : runtime(s)     IOPS      MiB/s    Fail/s     TO/s    Average       min       max
00:27:37.592 [2024-12-08T05:31:27.711Z] ===================================================================================================================
00:27:37.592 [2024-12-08T05:31:27.711Z] Total                       :                 0.00      0.00      0.00     0.00       0.00      0.00      0.00
00:27:37.592 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1171602
00:27:37.850 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1170165
00:27:37.850 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1170165 ']'
00:27:37.850 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1170165
00:27:37.850 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:27:37.850 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:37.850 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1170165
00:27:37.850 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:37.850 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:37.850 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1170165'
00:27:37.850 killing process with pid 1170165
00:27:37.850 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1170165
00:27:37.850 06:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1170165
00:27:38.109
00:27:38.110 real 0m15.957s
00:27:38.110 user 0m31.363s
00:27:38.110 sys 0m5.114s
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:38.110 ************************************
00:27:38.110 END TEST nvmf_digest_clean
00:27:38.110 ************************************
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:38.110 ************************************
00:27:38.110 START TEST nvmf_digest_error
00:27:38.110 ************************************
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1172095
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1172095
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1172095 ']'
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:38.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:38.110 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:38.369 [2024-12-08 06:31:28.274071] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:27:38.369 [2024-12-08 06:31:28.274163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:38.369 [2024-12-08 06:31:28.344068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:38.369 [2024-12-08 06:31:28.395342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:38.369 [2024-12-08 06:31:28.395410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:38.369 [2024-12-08 06:31:28.395435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:38.369 [2024-12-08 06:31:28.395446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:38.369 [2024-12-08 06:31:28.395455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
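The records above show nvmfappstart bringing up the SPDK NVMe-oF target inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc (which holds off subsystem initialization until an explicit framework_start_init RPC), then waitforlisten polling the target's UNIX-domain RPC socket. A minimal sketch of that startup sequence, assuming the workspace paths from the log; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, and rpc_get_methods is used here only as a cheap readiness probe:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start the target in the test namespace; --wait-for-rpc defers full
    # initialization until framework_start_init is sent over the RPC socket.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll the RPC socket until the target answers.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done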
00:27:38.369 [2024-12-08 06:31:28.396075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:38.627 [2024-12-08 06:31:28.524801] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:38.627 null0
00:27:38.627 [2024-12-08 06:31:28.649640] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:38.627 [2024-12-08 06:31:28.673884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1172200
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1172200 /var/tmp/bperf.sock
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1172200 ']'
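At this point the target has crc32c assigned to the error-injection accel module, a null0 bdev exported over NVMe/TCP on 10.0.0.2:4420, and bdevperf started in wait-for-tests mode (-z) on its own RPC socket. The log shows only the results of the bare rpc_cmd at host/digest.sh@43, not its payload, so the target-side configuration below is a hypothetical reconstruction using standard SPDK RPCs (the null0 size and block-size arguments are illustrative guesses):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC accel_assign_opc -o crc32c -m error   # route target-side crc32c through the error module
    $RPC framework_start_init                  # finish the initialization deferred by --wait-for-rpc
    $RPC bdev_null_create null0 100 4096       # backing namespace (sizes assumed)
    $RPC nvmf_create_transport -t tcp
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bperf: bdevperf idles (-z) until perform_tests is sent to /var/tmp/bperf.sock
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &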
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:38.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:38.627 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:38.885 [2024-12-08 06:31:28.725548] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:27:38.885 [2024-12-08 06:31:28.725628] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1172200 ]
00:27:38.885 [2024-12-08 06:31:28.800838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:38.885 [2024-12-08 06:31:28.859625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:38.885 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:38.885 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:38.885 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:38.885 06:31:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:39.143 06:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:39.143 06:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.143 06:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:39.143 06:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.143 06:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:39.143 06:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:39.709 nvme0n1
00:27:39.709 06:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:39.709 06:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.709 06:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
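The sequencing above is the core of the digest-error test: bdev_nvme is told to retry forever (--bdev-retry-count -1) and keep NVMe error statistics, crc32c injection is disabled on the target while the controller attaches with data digest enabled (--ddgst), and only once the nvme0n1 bdev exists is the target told to corrupt its next 256 crc32c operations. Condensed into the equivalent rpc.py calls, assuming the harness convention that bperf_rpc targets /var/tmp/bperf.sock while rpc_cmd targets the target's /var/tmp/spdk.sock:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # bperf side: unbounded retries so digest failures surface as retried
    # transient transport errors rather than failed I/O
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: no injection yet, so the controller can attach cleanly
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # now corrupt the target's next 256 crc32c (data digest) computations
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256

Each read that hits a corrupted digest produces one of the repeated "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" record pairs in the bdevperf run that follows.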
00:27:39.709 06:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.709 06:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:39.709 06:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:39.709 Running I/O for 2 seconds... 00:27:39.709 [2024-12-08 06:31:29.739128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.709 [2024-12-08 06:31:29.739174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.709 [2024-12-08 06:31:29.739198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.709 [2024-12-08 06:31:29.754831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.709 [2024-12-08 06:31:29.754862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.709 [2024-12-08 06:31:29.754879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.709 [2024-12-08 06:31:29.770382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.709 [2024-12-08 06:31:29.770412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.709 [2024-12-08 06:31:29.770428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.709 [2024-12-08 06:31:29.782671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.709 [2024-12-08 06:31:29.782733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.709 [2024-12-08 06:31:29.782752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.709 [2024-12-08 06:31:29.793857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.709 [2024-12-08 06:31:29.793890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.709 [2024-12-08 06:31:29.793909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.709 [2024-12-08 06:31:29.807972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.709 [2024-12-08 06:31:29.808018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.709 [2024-12-08 06:31:29.808036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.709 [2024-12-08 06:31:29.820975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.709 [2024-12-08 06:31:29.821020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.709 [2024-12-08 06:31:29.821048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.968 [2024-12-08 06:31:29.832036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.968 [2024-12-08 06:31:29.832088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.968 [2024-12-08 06:31:29.832106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.968 [2024-12-08 06:31:29.843957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.968 [2024-12-08 06:31:29.843987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.968 [2024-12-08 06:31:29.844019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.968 [2024-12-08 06:31:29.855605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.968 [2024-12-08 06:31:29.855632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.968 [2024-12-08 06:31:29.855663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.968 [2024-12-08 06:31:29.868442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.968 [2024-12-08 06:31:29.868470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.968 [2024-12-08 06:31:29.868502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.968 [2024-12-08 06:31:29.883863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.968 [2024-12-08 06:31:29.883893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.968 [2024-12-08 06:31:29.883910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.968 [2024-12-08 06:31:29.898918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.968 [2024-12-08 06:31:29.898948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.968 [2024-12-08 06:31:29.898965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.968 [2024-12-08 06:31:29.914499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.968 [2024-12-08 06:31:29.914527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.968 [2024-12-08 06:31:29.914558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.968 [2024-12-08 06:31:29.926632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.968 [2024-12-08 06:31:29.926659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.968 [2024-12-08 06:31:29.926691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.968 [2024-12-08 06:31:29.936737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.968 [2024-12-08 06:31:29.936782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.968 [2024-12-08 06:31:29.936800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.968 [2024-12-08 06:31:29.951756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.968 [2024-12-08 06:31:29.951785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.968 [2024-12-08 06:31:29.951802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.968 [2024-12-08 06:31:29.964449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.968 [2024-12-08 06:31:29.964476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.968 [2024-12-08 06:31:29.964506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.968 [2024-12-08 06:31:29.975031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.968 [2024-12-08 06:31:29.975059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.968 [2024-12-08 06:31:29.975089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.968 [2024-12-08 06:31:29.986256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.968 [2024-12-08 06:31:29.986284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.969 [2024-12-08 06:31:29.986314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.969 [2024-12-08 06:31:29.998230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.969 [2024-12-08 06:31:29.998258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.969 [2024-12-08 06:31:29.998288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.969 [2024-12-08 06:31:30.011917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.969 [2024-12-08 06:31:30.011953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.969 [2024-12-08 06:31:30.011971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.969 [2024-12-08 06:31:30.024520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.969 [2024-12-08 06:31:30.024570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.969 [2024-12-08 06:31:30.024588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.969 [2024-12-08 06:31:30.040643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.969 [2024-12-08 06:31:30.040693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.969 [2024-12-08 06:31:30.040710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.969 [2024-12-08 06:31:30.057352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.969 [2024-12-08 06:31:30.057386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.969 [2024-12-08 06:31:30.057432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.969 [2024-12-08 06:31:30.071984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.969 [2024-12-08 06:31:30.072016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.969 [2024-12-08 06:31:30.072034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.969 [2024-12-08 06:31:30.085337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:39.969 [2024-12-08 06:31:30.085369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.969 
[2024-12-08 06:31:30.085386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.096760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.096790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.096808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.108800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.108828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.108860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.118530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.118557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.118588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.132400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.132428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.132459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.147198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.147226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.147256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.160894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.160922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.160938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.174508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.174542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4447 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.174574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.190212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.190240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.190272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.200356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.200384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.200415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.214991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.215034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.215051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.229988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.230016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.230045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.243863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.243891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.243908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.253728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.253755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.253771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.266901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.266932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:22 nsid:1 lba:4866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.266950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.281368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.281395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.281426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.294242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.294269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.294299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.304313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.304340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.304371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.227 [2024-12-08 06:31:30.317551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.227 [2024-12-08 06:31:30.317577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.227 [2024-12-08 06:31:30.317607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.228 [2024-12-08 06:31:30.328139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.228 [2024-12-08 06:31:30.328165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.228 [2024-12-08 06:31:30.328195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.228 [2024-12-08 06:31:30.342517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.228 [2024-12-08 06:31:30.342555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.228 [2024-12-08 06:31:30.342583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.486 [2024-12-08 06:31:30.357120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.486 [2024-12-08 06:31:30.357150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.486 [2024-12-08 06:31:30.357181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.486 [2024-12-08 06:31:30.368648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.486 [2024-12-08 06:31:30.368676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.486 [2024-12-08 06:31:30.368707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.486 [2024-12-08 06:31:30.381604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.486 [2024-12-08 06:31:30.381631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.486 [2024-12-08 06:31:30.381662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.486 [2024-12-08 06:31:30.395840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.486 [2024-12-08 06:31:30.395868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.486 [2024-12-08 06:31:30.395890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.486 [2024-12-08 06:31:30.406817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.486 [2024-12-08 06:31:30.406846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.486 [2024-12-08 06:31:30.406862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.486 [2024-12-08 06:31:30.419614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.486 [2024-12-08 06:31:30.419642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.486 [2024-12-08 06:31:30.419672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.486 [2024-12-08 06:31:30.433542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.486 [2024-12-08 06:31:30.433569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.486 [2024-12-08 06:31:30.433601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.486 [2024-12-08 06:31:30.444333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.486 
[2024-12-08 06:31:30.444360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.486 [2024-12-08 06:31:30.444391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.486 [2024-12-08 06:31:30.458406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.486 [2024-12-08 06:31:30.458433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.486 [2024-12-08 06:31:30.458464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.486 [2024-12-08 06:31:30.468637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.486 [2024-12-08 06:31:30.468663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.486 [2024-12-08 06:31:30.468692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.486 [2024-12-08 06:31:30.482427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.486 [2024-12-08 06:31:30.482456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.486 [2024-12-08 06:31:30.482487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.486 [2024-12-08 06:31:30.499039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.486 [2024-12-08 06:31:30.499083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.487 [2024-12-08 06:31:30.499100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.487 [2024-12-08 06:31:30.515406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.487 [2024-12-08 06:31:30.515441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.487 [2024-12-08 06:31:30.515473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.487 [2024-12-08 06:31:30.531651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.487 [2024-12-08 06:31:30.531705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.487 [2024-12-08 06:31:30.531731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.487 [2024-12-08 06:31:30.541480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x614c50) 00:27:40.487 [2024-12-08 06:31:30.541508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.487 [2024-12-08 06:31:30.541539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.487 [2024-12-08 06:31:30.555049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.487 [2024-12-08 06:31:30.555078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.487 [2024-12-08 06:31:30.555093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.487 [2024-12-08 06:31:30.569562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.487 [2024-12-08 06:31:30.569590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.487 [2024-12-08 06:31:30.569621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.487 [2024-12-08 06:31:30.580275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.487 [2024-12-08 06:31:30.580304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.487 [2024-12-08 06:31:30.580336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.487 [2024-12-08 06:31:30.595195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.487 [2024-12-08 06:31:30.595223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.487 [2024-12-08 06:31:30.595254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.746 [2024-12-08 06:31:30.610468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.746 [2024-12-08 06:31:30.610498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.746 [2024-12-08 06:31:30.610530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.746 [2024-12-08 06:31:30.621272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.746 [2024-12-08 06:31:30.621301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.746 [2024-12-08 06:31:30.621333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.746 [2024-12-08 06:31:30.636797] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.746 [2024-12-08 06:31:30.636826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.746 [2024-12-08 06:31:30.636843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.746 [2024-12-08 06:31:30.647471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.746 [2024-12-08 06:31:30.647499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.746 [2024-12-08 06:31:30.647530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.746 [2024-12-08 06:31:30.663945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.747 [2024-12-08 06:31:30.663974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.747 [2024-12-08 06:31:30.663991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.747 [2024-12-08 06:31:30.677750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.747 [2024-12-08 06:31:30.677780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.747 [2024-12-08 06:31:30.677797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.747 [2024-12-08 06:31:30.689024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.747 [2024-12-08 06:31:30.689053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.747 [2024-12-08 06:31:30.689083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.747 [2024-12-08 06:31:30.704649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.747 [2024-12-08 06:31:30.704680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.747 [2024-12-08 06:31:30.704713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.747 [2024-12-08 06:31:30.720160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.747 [2024-12-08 06:31:30.720189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.747 [2024-12-08 06:31:30.720220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:40.747 19204.00 IOPS, 75.02 MiB/s [2024-12-08T05:31:30.866Z] [2024-12-08 06:31:30.735555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.747 [2024-12-08 06:31:30.735584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.747 [2024-12-08 06:31:30.735600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.747 [2024-12-08 06:31:30.746335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.747 [2024-12-08 06:31:30.746371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.747 [2024-12-08 06:31:30.746403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.747 [2024-12-08 06:31:30.759146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.747 [2024-12-08 06:31:30.759175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.747 [2024-12-08 06:31:30.759207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.747 [2024-12-08 06:31:30.773186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.747 [2024-12-08 06:31:30.773228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.747 [2024-12-08 06:31:30.773245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.747 [2024-12-08 06:31:30.786498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.747 [2024-12-08 06:31:30.786541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.747 [2024-12-08 06:31:30.786558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.747 [2024-12-08 06:31:30.798001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.747 [2024-12-08 06:31:30.798045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.747 [2024-12-08 06:31:30.798061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.747 [2024-12-08 06:31:30.813192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50) 00:27:40.747 [2024-12-08 06:31:30.813222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.747 [2024-12-08 06:31:30.813253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:40.747 [2024-12-08 06:31:30.828346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614c50)
00:27:40.747 [2024-12-08 06:31:30.828374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.747 [2024-12-08 06:31:30.828404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern -- a data digest *ERROR* from nvme_tcp.c:1365, the affected 4 KiB READ (len:1) from nvme_qpair.c:243, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c:474 -- repeats on qid:1 of tqpair 0x614c50 roughly every 10-15 ms, differing only in timestamp, cid, and lba, from 06:31:30.842 through 06:31:31.719 ...]
00:27:41.780 19178.50 IOPS, 74.92 MiB/s
00:27:41.780 Latency(us)
00:27:41.780 [2024-12-08T05:31:31.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:41.780 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:41.780 nvme0n1 : 2.04 18822.81 73.53 0.00 0.00 6660.16 3021.94 49710.27
00:27:41.780
[2024-12-08T05:31:31.899Z] =================================================================================================================== 00:27:41.780 [2024-12-08T05:31:31.899Z] Total : 18822.81 73.53 0.00 0.00 6660.16 3021.94 49710.27 00:27:41.780 { 00:27:41.780 "results": [ 00:27:41.780 { 00:27:41.780 "job": "nvme0n1", 00:27:41.780 "core_mask": "0x2", 00:27:41.780 "workload": "randread", 00:27:41.780 "status": "finished", 00:27:41.780 "queue_depth": 128, 00:27:41.780 "io_size": 4096, 00:27:41.780 "runtime": 2.044594, 00:27:41.780 "iops": 18822.80785329508, 00:27:41.780 "mibps": 73.5265931769339, 00:27:41.780 "io_failed": 0, 00:27:41.780 "io_timeout": 0, 00:27:41.780 "avg_latency_us": 6660.162893902868, 00:27:41.780 "min_latency_us": 3021.9377777777777, 00:27:41.780 "max_latency_us": 49710.26962962963 00:27:41.780 } 00:27:41.780 ], 00:27:41.780 "core_count": 1 00:27:41.780 } 00:27:41.780 06:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:41.780 06:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:41.780 06:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:41.780 06:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:41.780 | .driver_specific 00:27:41.780 | .nvme_error 00:27:41.780 | .status_code 00:27:41.780 | .command_transient_transport_error' 00:27:42.037 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 150 > 0 )) 00:27:42.037 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1172200 00:27:42.037 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1172200 ']' 00:27:42.037 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1172200 00:27:42.037 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:42.037 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:42.037 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1172200 00:27:42.037 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:42.037 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:42.037 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1172200' 00:27:42.037 killing process with pid 1172200 00:27:42.037 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1172200 00:27:42.037 Received shutdown signal, test time was about 2.000000 seconds 00:27:42.037 00:27:42.037 Latency(us) 00:27:42.037 [2024-12-08T05:31:32.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.037 [2024-12-08T05:31:32.156Z] =================================================================================================================== 00:27:42.037 [2024-12-08T05:31:32.156Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:42.037 
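For anyone replaying this by hand: the (( 150 > 0 )) assertion above comes from digest.sh's get_transient_errcount helper, which reads the per-bdev NVMe error counters enabled by bdev_nvme_set_options --nvme-error-stat and checks that the run actually produced transient transport errors. A minimal standalone sketch of that query, built only from the RPC socket and jq filter visible in the trace, would be:

  # count the COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The single-path jq expression is equivalent to the piped multi-line form the script uses; the value returned (150 in this run) depends on how many digests the injector corrupted inside the 2-second window.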
06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1172200 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1172645 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1172645 /var/tmp/bperf.sock 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1172645 ']' 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:42.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:42.295 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:42.295 [2024-12-08 06:31:32.364879] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:27:42.295 [2024-12-08 06:31:32.364954] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1172645 ] 00:27:42.295 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:42.295 Zero copy mechanism will not be used. 
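The second error pass changes the workload shape: 128 KiB random reads at queue depth 16 instead of the 4 KiB, depth-128 run above, so the digest failures now cover multi-block transfers. Condensed from the trace (the full /var/jenkins/... prefix shortened), the launch is:

  # 2-second randread pass, 128 KiB IOs, qd 16; -z starts bdevperf idle until
  # perform_tests arrives over /var/tmp/bperf.sock
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z

The "zero copy threshold" notice that follows is expected rather than an error: 131072-byte IOs exceed the 65536-byte threshold, so the TCP transport falls back to copied buffers, which should not change the digest behavior under test.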
00:27:42.551 [2024-12-08 06:31:32.432758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.551 [2024-12-08 06:31:32.489972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.551 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.551 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:42.551 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:42.551 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:42.809 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:42.809 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.809 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:42.809 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.809 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.809 06:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:43.373 nvme0n1 00:27:43.373 06:31:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:43.373 06:31:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.373 06:31:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:43.373 06:31:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.373 06:31:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:43.373 06:31:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:43.631 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:43.631 Zero copy mechanism will not be used. 00:27:43.631 Running I/O for 2 seconds... 
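The setup sequence above is what forces every read below to fail its digest check: crc32c error injection is disabled while nvme0 attaches (so the connect itself succeeds), the controller is attached with --ddgst to turn on TCP data digests, and only then is the injector re-armed in corrupt mode before perform_tests kicks off the IO. Note the two RPC sockets in play: bperf_rpc targets the bdevperf instance at /var/tmp/bperf.sock, while rpc_cmd goes to the nvmf target application's default socket (not shown in the trace; presumably /var/tmp/spdk.sock), so the corrupted crc32c results make the data digests fail verification on the initiator side, which is where nvme_tcp.c:1365 logs them. Condensed, the sequence is:

  # host side: keep NVMe error statistics and retry forever instead of failing the bdev
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: injector off while the controller attaches
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # host side: attach over TCP with data digest (--ddgst) enabled
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: start corrupting crc32c results (-t corrupt -i 32, flags copied verbatim from the trace)
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # tell the idle bdevperf to run the configured workload
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces as one ERROR/NOTICE/NOTICE triplet below: the initiator reports the data digest error and completes the READ with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. retryable, which is what the transient-error counter checked afterwards accumulates.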
00:27:43.631 [2024-12-08 06:31:33.526237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620)
00:27:43.631 [2024-12-08 06:31:33.526300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.631 [2024-12-08 06:31:33.526322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the triplet repeats for the 128 KiB READs (len:32) on qid:1 of tqpair 0xb9e620 roughly every 5-7 ms, differing only in timestamp, cid, lba, and sqhd, from 06:31:33.531 onward ...]
00:27:43.892 [2024-12-08 06:31:33.834631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620)
00:27:43.892 [2024-12-08 06:31:33.834658] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.834688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.841789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.841819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.841836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.847769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.847797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.847813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.854277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.854320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.854337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.861034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.861078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.861094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.867240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.867282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.867299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.873263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.873294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.873331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.878995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.879027] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.879058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.884746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.884779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.884795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.891469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.891496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.891528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.898567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.898595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.898626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.906316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.906344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.906374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.913938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.913968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.913984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.920626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.920654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.920686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.927386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 
00:27:43.892 [2024-12-08 06:31:33.927414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.927445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.933778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.933812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.933829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.940380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.940407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.940437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.946869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.946897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.946913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.953517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.953545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.953575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.960116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.960144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.960175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.967164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.967205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.967221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.973478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.973505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.892 [2024-12-08 06:31:33.973535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.892 [2024-12-08 06:31:33.980184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.892 [2024-12-08 06:31:33.980212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.893 [2024-12-08 06:31:33.980243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.893 [2024-12-08 06:31:33.986988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.893 [2024-12-08 06:31:33.987016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.893 [2024-12-08 06:31:33.987031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.893 [2024-12-08 06:31:33.993865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.893 [2024-12-08 06:31:33.993894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.893 [2024-12-08 06:31:33.993910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.893 [2024-12-08 06:31:34.000604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.893 [2024-12-08 06:31:34.000630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.893 [2024-12-08 06:31:34.000661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.893 [2024-12-08 06:31:34.007443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:43.893 [2024-12-08 06:31:34.007473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.893 [2024-12-08 06:31:34.007490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.014626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.014665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.014694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.021862] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.021894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.021910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.028712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.028760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.028778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.036277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.036306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.036337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.045380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.045409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.045441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.053115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.053143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.053181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.060404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.060432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.060465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.068286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.068314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.068345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:27:44.151 [2024-12-08 06:31:34.077131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.077159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.077190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.084423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.084466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.084483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.091594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.091622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.091653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.098781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.098810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.098826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.105463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.105490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.105520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.112271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.112312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.112328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.119396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.151 [2024-12-08 06:31:34.119428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.151 [2024-12-08 06:31:34.119459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.151 [2024-12-08 06:31:34.127081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.127109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.127140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.134759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.134788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.134804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.142244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.142272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.142303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.149813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.149842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.149859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.157266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.157293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.157325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.164602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.164629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.164660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.172046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.172075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.172091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.179521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.179548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.179579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.186623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.186663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.186680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.193340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.193368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.193399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.200058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.200085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.200116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.206684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.206710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.206750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.213758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.213801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.213817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.221794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.221824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.221841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.230730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.230759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.230775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.238177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.238204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.238235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.245193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.245220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.245256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.253403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.253430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.253461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.261927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.261957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.261974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.152 [2024-12-08 06:31:34.269450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.152 [2024-12-08 06:31:34.269483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.152 [2024-12-08 06:31:34.269511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.276054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.276084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 
[2024-12-08 06:31:34.276117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.283383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.283426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.283443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.291181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.291211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.291243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.297885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.297919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.297937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.303918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.303948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.303965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.310291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.310327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.310359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.316106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.316135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.316166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.322158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.322186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.322216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.327825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.327853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.327869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.332878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.332908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.332924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.339419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.339446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.339477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.345596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.345624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.345654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.352485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.352513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.352545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.359089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.359118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.359149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.363362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.363390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.363421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.369996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.370026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.370043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.375944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.375976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.375994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.383244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.383275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.411 [2024-12-08 06:31:34.383307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.411 [2024-12-08 06:31:34.389716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.411 [2024-12-08 06:31:34.389755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.389789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.395951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.395983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.396015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.403460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.403491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.403522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.411087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.411117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.411148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.418861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.418895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.418922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.424969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.424998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.425027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.431218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.431246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.431277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.437806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.437835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.437852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.443814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.443842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.443859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.449312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.449354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.449371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.455306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 
[2024-12-08 06:31:34.455335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.455366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.460691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.460719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.460759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.466836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.466866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.466882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.474240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.474269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.474300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.481144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.481173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.481205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.487179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.487207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.487238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.492430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:44.412 [2024-12-08 06:31:34.492472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.412 [2024-12-08 06:31:34.492490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.412 [2024-12-08 06:31:34.498489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb9e620)
00:27:44.412 [2024-12-08 06:31:34.498522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.412 [2024-12-08 06:31:34.498555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
4619.00 IOPS, 577.38 MiB/s [2024-12-08T05:31:34.531Z]
[... the same three-line sequence -- a data digest error from nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done on tqpair=(0xb9e620), the failed READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats every few milliseconds from 06:31:34.504 through 06:31:34.959, varying only in cid (0-14), lba, and sqhd ...]
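Aside, since this pattern fills the rest of the window: each triplet above is the NVMe/TCP data-digest check rejecting a controller-to-host data PDU. The DDGST trailer on such a PDU is a CRC32C over the payload; when the recomputed value disagrees (which this test run appears to be injecting deliberately), the transport completes the command with a transient transport error rather than surfacing corrupt data. Below is a minimal standalone sketch of that check -- illustrative only, not SPDK's implementation (SPDK offloads the CRC to an accel sequence, which is why the failure surfaces in nvme_tcp_accel_seq_recv_compute_crc32_done); crc32c and check_data_digest are names invented for the example.

/* ddgst_check.c -- illustrative sketch only; not SPDK code.
 * NVMe/TCP protects each data PDU with a 4-byte DDGST trailer: a CRC32C
 * (Castagnoli) over the PDU payload. The receiver recomputes the CRC and
 * fails the command on mismatch, as the *ERROR* lines above record. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise, reflected CRC32C: init 0xFFFFFFFF, poly 0x82F63B78, final XOR. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return ~crc;
}

/* Returns 0 when the received DDGST matches the payload, -1 otherwise. */
static int check_data_digest(const uint8_t *payload, size_t len, uint32_t ddgst)
{
    return crc32c(payload, len) == ddgst ? 0 : -1;
}

int main(void)
{
    uint8_t data[32] = {0};
    uint32_t good = crc32c(data, sizeof(data));
    /* Flipping one digest bit models the corruption being injected here. */
    printf("intact: %d corrupted: %d\n",
           check_data_digest(data, sizeof(data), good),
           check_data_digest(data, sizeof(data), good ^ 1u));
    return 0;
}

Compiled with any C compiler, the corrupted case returns -1 -- the same mismatch condition that produces each *ERROR* line above.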
00:27:44.931 [2024-12-08 06:31:34.966188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620)
00:27:44.931 [2024-12-08 06:31:34.966215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.931 [2024-12-08 06:31:34.966246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the identical triplet keeps repeating every few milliseconds from 06:31:34.972 through 06:31:35.476, still on tqpair=(0xb9e620), varying only in cid, lba, and sqhd ...]
[2024-12-08 06:31:35.476365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.453 [2024-12-08 06:31:35.476396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.453 [2024-12-08 06:31:35.483773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:45.453 [2024-12-08 06:31:35.483802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.453 [2024-12-08 06:31:35.483834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.453 [2024-12-08 06:31:35.491097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:45.453 [2024-12-08 06:31:35.491125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.453 [2024-12-08 06:31:35.491156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.453 [2024-12-08 06:31:35.498025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:45.453 [2024-12-08 06:31:35.498067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.453 [2024-12-08 06:31:35.498082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.453 [2024-12-08 06:31:35.505123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:45.453 [2024-12-08 06:31:35.505166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.453 [2024-12-08 06:31:35.505190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.453 [2024-12-08 06:31:35.511090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:45.453 [2024-12-08 06:31:35.511118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.453 [2024-12-08 06:31:35.511150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.453 [2024-12-08 06:31:35.516547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb9e620) 00:27:45.453 [2024-12-08 06:31:35.516575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.453 [2024-12-08 06:31:35.516606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.453 [2024-12-08 06:31:35.521974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb9e620) 00:27:45.453 [2024-12-08 06:31:35.522016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.453 [2024-12-08 06:31:35.522032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.453 4498.00 IOPS, 562.25 MiB/s 00:27:45.453 Latency(us) 00:27:45.453 [2024-12-08T05:31:35.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.453 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:45.453 nvme0n1 : 2.00 4499.06 562.38 0.00 0.00 3552.58 879.88 10874.12 00:27:45.453 [2024-12-08T05:31:35.572Z] =================================================================================================================== 00:27:45.453 [2024-12-08T05:31:35.572Z] Total : 4499.06 562.38 0.00 0.00 3552.58 879.88 10874.12 00:27:45.453 { 00:27:45.453 "results": [ 00:27:45.453 { 00:27:45.453 "job": "nvme0n1", 00:27:45.453 "core_mask": "0x2", 00:27:45.453 "workload": "randread", 00:27:45.453 "status": "finished", 00:27:45.453 "queue_depth": 16, 00:27:45.453 "io_size": 131072, 00:27:45.453 "runtime": 2.003085, 00:27:45.453 "iops": 4499.060199642052, 00:27:45.453 "mibps": 562.3825249552565, 00:27:45.453 "io_failed": 0, 00:27:45.453 "io_timeout": 0, 00:27:45.453 "avg_latency_us": 3552.5765270996694, 00:27:45.453 "min_latency_us": 879.8814814814815, 00:27:45.453 "max_latency_us": 10874.121481481481 00:27:45.453 } 00:27:45.453 ], 00:27:45.453 "core_count": 1 00:27:45.453 } 00:27:45.453 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:45.453 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:45.453 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:45.453 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:45.453 | .driver_specific 00:27:45.453 | .nvme_error 00:27:45.453 | .status_code 00:27:45.453 | .command_transient_transport_error' 00:27:45.710 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 291 > 0 )) 00:27:45.710 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1172645 00:27:45.710 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1172645 ']' 00:27:45.710 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1172645 00:27:45.710 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:45.710 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:45.710 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1172645 00:27:45.967 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:45.967 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:45.967 06:31:35 
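The (( 291 > 0 )) check above is the pass criterion for the randread case. bdevperf itself reports "io_failed": 0, since --bdev-retry-count -1 lets every digest failure be retried, so the harness instead asserts that the controller's COMMAND TRANSIENT TRANSPORT ERROR counter advanced. A minimal sketch of the same check, assuming an SPDK checkout as the working directory, bdevperf still serving RPC on /var/tmp/bperf.sock, and a bdev named nvme0n1 (errcount is an illustrative variable name; the jq path is the filter traced above, written on one line):

# bdev_get_iostat carries per-status-code NVMe error counters because the
# controller was set up with bdev_nvme_set_options --nvme-error-stat
errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# this run counted 291 transient transport errors; any nonzero count passes
(( errcount > 0 ))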
00:27:45.710 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1172645
00:27:45.710 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1172645 ']'
00:27:45.710 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1172645
00:27:45.710 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:45.710 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:45.710 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1172645
00:27:45.967 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:45.967 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:45.967 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1172645'
killing process with pid 1172645
06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1172645
Received shutdown signal, test time was about 2.000000 seconds
00:27:45.967
00:27:45.967 Latency(us)
00:27:45.967 [2024-12-08T05:31:36.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:45.967 [2024-12-08T05:31:36.086Z] ===================================================================================================================
00:27:45.967 [2024-12-08T05:31:36.086Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:45.967 06:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1172645
00:27:46.224 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:46.224 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:46.224 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:46.224 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:46.224 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:46.224 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1173057
00:27:46.224 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:46.224 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1173057 /var/tmp/bperf.sock
00:27:46.224 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1173057 ']'
00:27:46.224 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:46.224 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:46.224 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:46.224 [2024-12-08 06:31:36.135278] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:27:46.224 [2024-12-08 06:31:36.135352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173057 ]
00:27:46.224 [2024-12-08 06:31:36.201037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:46.224 [2024-12-08 06:31:36.255387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:46.481 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:46.481 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:46.481 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:46.481 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:46.738 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:46.738 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.738 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:46.738 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.738 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:46.738 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:46.995 nvme0n1
00:27:46.995 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:46.995 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.995 06:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:46.995 06:31:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.995 06:31:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:46.995 06:31:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
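Stripped of the xtrace noise, the setup just traced is a short RPC sequence. A minimal sketch of the same steps, assuming an SPDK checkout as the working directory and an NVMe-oF TCP target already listening at 10.0.0.2:4420 (bperf_rpc and rpc_cmd above are thin wrappers over rpc.py pointed at /var/tmp/bperf.sock):

# start bdevperf on core 1 (mask 0x2): 4 KiB random writes, QD 128, 2 s run;
# -z makes it idle until a perform_tests RPC arrives on the private socket
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# keep per-status-code NVMe error counters and retry failed I/O indefinitely,
# so digest failures stay transient instead of failing the job
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# injection is switched off while attaching (presumably so the connect itself
# completes cleanly); --ddgst enables the NVMe/TCP data digest on this path
scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# re-arm the injector to corrupt crc32c results (-i 256 as traced above), then
# kick off the timed run
scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each failed digest check then surfaces as a COMMAND TRANSIENT TRANSPORT ERROR completion, which is exactly what the flood of records below shows and what the transient-error-count assertion keys on.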
00:27:46.995 Running I/O for 2 seconds...
00:27:47.252 [2024-12-08 06:31:37.122978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78
[2024-12-08 06:31:37.123217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-08 06:31:37.123263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
[... repeated records elided: the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet recurs on tqpair=(0x1c37b20) with pdu=0x200016efda78, qid:1, varying cid and lba, roughly every 12 ms from 06:31:37.135544 through 06:31:38.113265 ...]
00:27:48.032 21120.00 IOPS, 82.50 MiB/s
[2024-12-08T05:31:38.151Z]
[2024-12-08 06:31:38.124253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78
[2024-12-08 06:31:38.124474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-08 06:31:38.124499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
[... repeated records elided: further identical triplets (cid:57, cid:102, cid:103, cid:104, cid:113, cid:17) from 06:31:38.136452 through 06:31:38.197461 ...]
00:27:48.291 [2024-12-08 06:31:38.209369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error
on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.209576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.209613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.222081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.222314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.222340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.234515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.234748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.234776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.246896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.247152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.247178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.259270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.259507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.259532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.271569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.271802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.271829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.283870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.284116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.284142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.296159] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.296387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.296413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.308650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.308901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.308927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.320938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.321179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.321205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.333274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.333500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.333525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.345589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.345813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.345840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.357848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.358082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.358108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.370097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.370330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.370356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 
06:31:38.382370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.382594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.382620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.394645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.394900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.394927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.291 [2024-12-08 06:31:38.407771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.291 [2024-12-08 06:31:38.407930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.291 [2024-12-08 06:31:38.407968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.550 [2024-12-08 06:31:38.420669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.550 [2024-12-08 06:31:38.420866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.550 [2024-12-08 06:31:38.420896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.550 [2024-12-08 06:31:38.433278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.550 [2024-12-08 06:31:38.433515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.550 [2024-12-08 06:31:38.433548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.550 [2024-12-08 06:31:38.445959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.550 [2024-12-08 06:31:38.446210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.550 [2024-12-08 06:31:38.446237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.550 [2024-12-08 06:31:38.458291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.550 [2024-12-08 06:31:38.458521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.550 [2024-12-08 06:31:38.458547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:27:48.550 [2024-12-08 06:31:38.470548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.550 [2024-12-08 06:31:38.470783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.550 [2024-12-08 06:31:38.470811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.550 [2024-12-08 06:31:38.482879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.550 [2024-12-08 06:31:38.483130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.550 [2024-12-08 06:31:38.483156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.550 [2024-12-08 06:31:38.495163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.550 [2024-12-08 06:31:38.495385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.550 [2024-12-08 06:31:38.495411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.550 [2024-12-08 06:31:38.507437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.550 [2024-12-08 06:31:38.507661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.550 [2024-12-08 06:31:38.507687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.550 [2024-12-08 06:31:38.519689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.550 [2024-12-08 06:31:38.519943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.550 [2024-12-08 06:31:38.519970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.550 [2024-12-08 06:31:38.531994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.550 [2024-12-08 06:31:38.532234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.550 [2024-12-08 06:31:38.532260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.550 [2024-12-08 06:31:38.544314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.550 [2024-12-08 06:31:38.544555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.550 [2024-12-08 06:31:38.544581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.550 [2024-12-08 06:31:38.556615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.550 [2024-12-08 06:31:38.556872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.550 [2024-12-08 06:31:38.556899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.551 [2024-12-08 06:31:38.568912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.551 [2024-12-08 06:31:38.569162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.551 [2024-12-08 06:31:38.569188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.551 [2024-12-08 06:31:38.581202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.551 [2024-12-08 06:31:38.581431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.551 [2024-12-08 06:31:38.581457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.551 [2024-12-08 06:31:38.593481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.551 [2024-12-08 06:31:38.593719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.551 [2024-12-08 06:31:38.593767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.551 [2024-12-08 06:31:38.605808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.551 [2024-12-08 06:31:38.605993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.551 [2024-12-08 06:31:38.606033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.551 [2024-12-08 06:31:38.618095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.551 [2024-12-08 06:31:38.618335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.551 [2024-12-08 06:31:38.618361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.551 [2024-12-08 06:31:38.630334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.551 [2024-12-08 06:31:38.630570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.551 [2024-12-08 06:31:38.630595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.551 [2024-12-08 06:31:38.642607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.551 [2024-12-08 06:31:38.642881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.551 [2024-12-08 06:31:38.642909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.551 [2024-12-08 06:31:38.655192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.551 [2024-12-08 06:31:38.655444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.551 [2024-12-08 06:31:38.655487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.551 [2024-12-08 06:31:38.668320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.551 [2024-12-08 06:31:38.668550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.551 [2024-12-08 06:31:38.668578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.809 [2024-12-08 06:31:38.681002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.809 [2024-12-08 06:31:38.681254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.809 [2024-12-08 06:31:38.681282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.809 [2024-12-08 06:31:38.693288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.809 [2024-12-08 06:31:38.693516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.809 [2024-12-08 06:31:38.693543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.705526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.705764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.705791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.717874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.718116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.718141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.730172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.730403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.730429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.742439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.742667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.742693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.754824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.755066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.755101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.767169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.767399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.767424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.779458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.779691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.779740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.791751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.791980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.792020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.804009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.804264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.804290] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.816301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.816529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.816554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.828563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.828789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.828817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.840824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.841060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.841086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.853142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.853367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.853393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.865410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.865634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.865666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.877689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.877875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.877906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.890017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.890229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 
06:31:38.890255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.902270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.902435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.902461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.914971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.915186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.915212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.810 [2024-12-08 06:31:38.928167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:48.810 [2024-12-08 06:31:38.928345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.810 [2024-12-08 06:31:38.928383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:38.940678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:38.940851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:38.940881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:38.953143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:38.953292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:38.953318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:38.965526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:38.965676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:38.965716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:38.978082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:38.978233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:49.069 [2024-12-08 06:31:38.978258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:38.990437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:38.990619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:38.990645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:39.002671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:39.002837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:39.002864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:39.014963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:39.015226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:39.015251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:39.026863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:39.027048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:39.027089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:39.038796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:39.039106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:39.039136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:39.050593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:39.050871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:39.050899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:39.062649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:39.062833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5099 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:39.062859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:39.074481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:39.074641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:39.074670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:39.086432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:39.086590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:39.086615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:39.098424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:39.098622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:39.098649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 [2024-12-08 06:31:39.110815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37b20) with pdu=0x200016efda78 00:27:49.069 [2024-12-08 06:31:39.111029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.069 [2024-12-08 06:31:39.111070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.069 20927.00 IOPS, 81.75 MiB/s 00:27:49.069 Latency(us) 00:27:49.069 [2024-12-08T05:31:39.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.069 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:49.069 nvme0n1 : 2.01 20930.23 81.76 0.00 0.00 6102.96 2754.94 13107.20 00:27:49.069 [2024-12-08T05:31:39.188Z] =================================================================================================================== 00:27:49.069 [2024-12-08T05:31:39.188Z] Total : 20930.23 81.76 0.00 0.00 6102.96 2754.94 13107.20 00:27:49.069 { 00:27:49.069 "results": [ 00:27:49.069 { 00:27:49.069 "job": "nvme0n1", 00:27:49.069 "core_mask": "0x2", 00:27:49.069 "workload": "randwrite", 00:27:49.069 "status": "finished", 00:27:49.069 "queue_depth": 128, 00:27:49.069 "io_size": 4096, 00:27:49.069 "runtime": 2.007336, 00:27:49.069 "iops": 20930.22792397486, 00:27:49.069 "mibps": 81.7587028280268, 00:27:49.069 "io_failed": 0, 00:27:49.069 "io_timeout": 0, 00:27:49.069 "avg_latency_us": 6102.963234618443, 00:27:49.069 "min_latency_us": 2754.9392592592594, 00:27:49.069 "max_latency_us": 13107.2 00:27:49.069 } 00:27:49.069 ], 00:27:49.069 "core_count": 1 00:27:49.069 } 00:27:49.069 06:31:39 
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:49.069
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:49.069
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error' 00:27:49.069
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:49.326
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 )) 00:27:49.326
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1173057 00:27:49.326
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1173057 ']' 00:27:49.326
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1173057 00:27:49.326
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:49.326
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.326
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1173057 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1173057' 00:27:49.583
killing process with pid 1173057 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1173057 00:27:49.583
Received shutdown signal, test time was about 2.000000 seconds 00:27:49.583
Latency(us) 00:27:49.583
[2024-12-08T05:31:39.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.583
[2024-12-08T05:31:39.702Z] =================================================================================================================== 00:27:49.583
[2024-12-08T05:31:39.702Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1173057 00:27:49.583
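The next pass of run_bperf_err, traced below, switches to 128 KiB writes (bs=131072) at queue depth 16. The helper launches bdevperf with -z, which keeps it idle until perform_tests is sent over its RPC socket, and blocks in waitforlisten until the UNIX-domain socket is up before configuring error injection. A sketch of that launch under the flags from this log (the polling loop is a simplified stand-in for the waitforlisten helper; the real helper also checks the pid and caps retries):

    #!/usr/bin/env bash
    # Sketch: start bdevperf idle (-z) with this pass's parameters
    # (core mask 0x2, randwrite, 128 KiB I/O, queue depth 16, 2 s run).
    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    SOCK=/var/tmp/bperf.sock

    "$BDEVPERF" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Simplified stand-in for waitforlisten: poll until the RPC socket exists.
    for _ in $(seq 1 100); do
        [[ -S $SOCK ]] && break
        sleep 0.1
    done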
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1173461 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1173461 /var/tmp/bperf.sock 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1173461 ']' 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:49.583
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.583
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.841
[2024-12-08 06:31:39.733230] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
[2024-12-08 06:31:39.733303] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173461 ] 00:27:49.841
I/O size of 131072 is greater than zero copy threshold (65536). 00:27:49.841
Zero copy mechanism will not be used. 00:27:49.841
[2024-12-08 06:31:39.798861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.841
[2024-12-08 06:31:39.853130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.841
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.841
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:49.841
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:49.841
06:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:50.407
06:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:50.407
06:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.407
06:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.407
06:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.407
06:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:50.407
06:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:50.665
nvme0n1 00:27:50.665
06:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:50.665
06:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.665
06:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.665
06:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.665
06:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:50.665
06:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:50.924
I/O size of 131072 is greater than zero copy threshold (65536). 00:27:50.924
Zero copy mechanism will not be used. 00:27:50.924
Running I/O for 2 seconds... 00:27:50.924
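The sequence just traced is the heart of the digest-error pass: NVMe error statistics and unlimited retries are enabled, crc32c injection is switched off while the controller connects with TCP data digest (--ddgst) enabled, and only then is the accel injector armed to corrupt the next 32 crc32c operations, which produces the data_crc32_calc_done failures that follow. The same steps as plain rpc.py calls (a sketch; the socket routing is an inference from the wrappers used here, with bperf_rpc targeting /var/tmp/bperf.sock and rpc_cmd the target's default socket):

    #!/usr/bin/env bash
    # Sketch of the injection setup traced above; RPC names and flags appear
    # verbatim in this log, the socket routing is inferred from bperf_rpc/rpc_cmd.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    # Initiator side: keep per-controller NVMe error counters and retry forever.
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Injection off while the controller connects (default target socket).
    "$RPC" accel_error_inject_error -o crc32c -t disable

    # Attach over TCP with data digest (DDGST) enabled.
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 32 crc32c operations: each affected WRITE fails DDGST
    # verification and completes with a transient transport error.
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32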
[2024-12-08 06:31:40.821000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:50.924
[2024-12-08 06:31:40.821265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.924
[2024-12-08 06:31:40.821302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.924
[... the same three-record pattern (tcp.c:2241 data digest error on tqpair 0x1c37e60, 128 KiB WRITE print with len:32, TRANSIENT TRANSPORT ERROR completion) repeats every 6-8 ms, always on qid:1 cid:0 with varying lba and sqhd cycling 0002/0022/0042/0062, from 06:31:40.827 through 06:31:41.031 ...]
[2024-12-08 06:31:41.038356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:50.925
[2024-12-08 06:31:41.038479] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.925 [2024-12-08 06:31:41.038507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.185 [2024-12-08 06:31:41.045692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.185 [2024-12-08 06:31:41.045853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.185 [2024-12-08 06:31:41.045883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.185 [2024-12-08 06:31:41.052867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.185 [2024-12-08 06:31:41.053117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.185 [2024-12-08 06:31:41.053144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.185 [2024-12-08 06:31:41.060193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.185 [2024-12-08 06:31:41.060306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.060331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.067807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.068056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.068082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.075533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.075774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.075802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.082957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.083101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.083129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.089309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.089448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.089475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.095576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.095738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.095766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.101963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.102116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.102142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.109351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.109470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.109496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.118091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.118312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.118338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.126314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.126534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.126560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.133742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.133897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.133923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.140405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 
06:31:41.140552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.140578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.146560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.146716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.146762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.152272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.152371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.152397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.158046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.158138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.158163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.163795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.163913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.163938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.169883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.169989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.170015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.177005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.177141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.177167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.183942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with 
pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.184086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.184111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.190121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.190209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.190233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.195925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.196060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.196087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.202586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.202685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.202710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.208987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.209062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.209086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.215373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.215459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.215484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.221587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.221680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.221719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.227457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.227544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.227569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.233512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.186 [2024-12-08 06:31:41.233609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.186 [2024-12-08 06:31:41.233633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.186 [2024-12-08 06:31:41.239601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.187 [2024-12-08 06:31:41.239681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.187 [2024-12-08 06:31:41.239705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.187 [2024-12-08 06:31:41.245779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.187 [2024-12-08 06:31:41.245866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.187 [2024-12-08 06:31:41.245892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.187 [2024-12-08 06:31:41.251659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.187 [2024-12-08 06:31:41.251771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.187 [2024-12-08 06:31:41.251796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.187 [2024-12-08 06:31:41.257326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.187 [2024-12-08 06:31:41.257422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.187 [2024-12-08 06:31:41.257447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.187 [2024-12-08 06:31:41.262903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.187 [2024-12-08 06:31:41.262968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.187 [2024-12-08 06:31:41.262992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.187 [2024-12-08 06:31:41.268610] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.187 [2024-12-08 06:31:41.268710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.187 [2024-12-08 06:31:41.268762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.187 [2024-12-08 06:31:41.274350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.187 [2024-12-08 06:31:41.274440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.187 [2024-12-08 06:31:41.274464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.187 [2024-12-08 06:31:41.279937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.187 [2024-12-08 06:31:41.280043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.187 [2024-12-08 06:31:41.280068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.187 [2024-12-08 06:31:41.285547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.187 [2024-12-08 06:31:41.285643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.187 [2024-12-08 06:31:41.285667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.187 [2024-12-08 06:31:41.291188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.187 [2024-12-08 06:31:41.291316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.187 [2024-12-08 06:31:41.291341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.187 [2024-12-08 06:31:41.296869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.187 [2024-12-08 06:31:41.296953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.187 [2024-12-08 06:31:41.296978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.187 [2024-12-08 06:31:41.303504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.187 [2024-12-08 06:31:41.303592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.187 [2024-12-08 06:31:41.303626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.309922] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.309992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.310035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.316365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.316460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.316488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.323068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.323174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.323200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.329528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.329616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.329655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.336412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.336517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.336544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.342659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.342781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.342808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.348421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.348523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.348550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.447 
[2024-12-08 06:31:41.353875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.353953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.353979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.359217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.359334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.359360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.365249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.365388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.365414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.372496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.372628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.372660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.379132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.379266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.379293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.386540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.386641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.386681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.394537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.394795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.394821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.402597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.402811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.402838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.410020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.410145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.410171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.417928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.418062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.418088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.426028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.426154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.426179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.434048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.434203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.434229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.447 [2024-12-08 06:31:41.442880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.447 [2024-12-08 06:31:41.442982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.447 [2024-12-08 06:31:41.443024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.451075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.451286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.451312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.459871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.460098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.460124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.469079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.469177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.469202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.478164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.478306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.478332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.485981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.486164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.486190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.493845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.493953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.493987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.501759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.501946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.501972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.509969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.510142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.510168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.518705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.518866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.518893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.528194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.528322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.528349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.536095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.536358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.536384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.542267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.542553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.542581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.548509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.548789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.548816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.554637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.554977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.555005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.448 [2024-12-08 06:31:41.561510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.448 [2024-12-08 06:31:41.561891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.448 [2024-12-08 06:31:41.561920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.568772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.569050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.569078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.575174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.575448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.575475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.581319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.581607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.581637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.588133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.588404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.588432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.594639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.594966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.594994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.601125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.601391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.601418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.607276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.607554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 
06:31:41.607581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.613499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.613795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.613822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.619683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.619977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.620004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.625970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.626245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.626272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.632080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.632331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.632357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.639562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.639897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.639925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.646553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.646857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.646884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.653767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.654106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:51.708 [2024-12-08 06:31:41.654132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.660893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.661162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.661188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.667324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.667653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.667680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.673621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.673908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.673941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.679949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.680233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.680260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.686224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.686495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.686521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.692654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.692950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.692977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.698942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.699333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.699373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.705362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.705632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.705658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.711687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.711983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.712021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.717889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.718176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.718202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.724434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.708 [2024-12-08 06:31:41.724704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.708 [2024-12-08 06:31:41.724751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.708 [2024-12-08 06:31:41.730362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.730639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.730665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.709 [2024-12-08 06:31:41.736451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.736746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.736773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.709 [2024-12-08 06:31:41.742596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.742897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.742924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.709 [2024-12-08 06:31:41.748939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.749219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.749245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.709 [2024-12-08 06:31:41.755485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.755764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.755791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.709 [2024-12-08 06:31:41.762121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.762366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.762391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.709 [2024-12-08 06:31:41.768673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.768958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.768985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.709 [2024-12-08 06:31:41.775551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.775834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.775861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.709 [2024-12-08 06:31:41.781995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.782260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.782286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.709 [2024-12-08 06:31:41.788715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.789003] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.789030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.709 [2024-12-08 06:31:41.795824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.796103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.796128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.709 [2024-12-08 06:31:41.802773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.803042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.803068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.709 [2024-12-08 06:31:41.809965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.810236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.810262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.709 4548.00 IOPS, 568.50 MiB/s [2024-12-08T05:31:41.828Z] [2024-12-08 06:31:41.817337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.817590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.817616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.709 [2024-12-08 06:31:41.823902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.709 [2024-12-08 06:31:41.824215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.709 [2024-12-08 06:31:41.824243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.830323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.830646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.830678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.836728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 
00:27:51.969 [2024-12-08 06:31:41.837042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.837072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.843633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.843930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.843968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.850201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.850458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.850485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.856902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.857193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.857220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.863637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.863927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.863956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.870415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.870671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.870697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.877542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.877826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.877854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.884157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.884411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.884437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.890568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.890844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.890872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.898239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.898509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.898535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.906475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.906730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.906771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.913183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.913446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.913472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.919089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.919304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.919330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.925654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.925928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.925955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.932116] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.932330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.932356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.938043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.938252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.938278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.943870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.944120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.944146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.949698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.949922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.949948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.955670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.955926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.955953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.961781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.962027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.962054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.969 [2024-12-08 06:31:41.968199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.969 [2024-12-08 06:31:41.968474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.969 [2024-12-08 06:31:41.968500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:41.974176] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:41.974390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:41.974416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:41.979555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:41.979813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:41.979842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:41.984923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:41.985188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:41.985213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:41.990221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:41.990444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:41.990470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:41.995717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:41.995942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:41.995970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.001245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.001496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.001523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.006872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.007095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.007128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.970 
[2024-12-08 06:31:42.012518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.012808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.012836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.018046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.018300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.018326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.023620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.023867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.023894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.029076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.029316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.029342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.034855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.035099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.035127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.040522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.040779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.040809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.046647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.046886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.046920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.052632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.052865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.052895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.059149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.059327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.059354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.066088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.066339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.066366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.072299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.072502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.072529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.077688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.077901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.077929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.970 [2024-12-08 06:31:42.083457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:51.970 [2024-12-08 06:31:42.083682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.970 [2024-12-08 06:31:42.083733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.089161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.089392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.089423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.094928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.095198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.095226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.100486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.100687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.100737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.106178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.106429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.106457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.112447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.112811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.112840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.117967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.118187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.118214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.123379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.123578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.123605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.128847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.129061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.129087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.134241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.134470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.134498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.139628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.139859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.139887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.144985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.145191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.145218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.151374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.151626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.151652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.156969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.157165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.157199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.162382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.162578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.162605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.167741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.167959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.167986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.173149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.230 [2024-12-08 06:31:42.173369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.230 [2024-12-08 06:31:42.173395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.230 [2024-12-08 06:31:42.180551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.181178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.181203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.186767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.186983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.187011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.192430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.192672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.192698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.198210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.198430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.198456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.203937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.204156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.204182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.209456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.209663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 
06:31:42.209689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.215325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.215529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.215555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.220917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.221138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.221165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.226487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.226769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.226796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.233076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.233275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.233302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.239196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.239382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.239409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.245328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.245503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.245529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.251774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.251991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:52.231 [2024-12-08 06:31:42.252031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.257814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.258045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.258071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.263758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.263976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.264004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.269362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.269564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.269590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.275756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.275961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.275988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.281437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.281632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.281659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.287194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.287406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.287432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.293139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.293334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.293360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.298561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.298770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.298798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.303767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.304054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.304081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.309813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.310054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.310088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.315715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.315931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.315959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.321081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.321296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.321325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.326315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.326521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.326547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.231 [2024-12-08 06:31:42.331641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.231 [2024-12-08 06:31:42.331927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.231 [2024-12-08 06:31:42.331956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.232 [2024-12-08 06:31:42.337693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.232 [2024-12-08 06:31:42.338040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.232 [2024-12-08 06:31:42.338069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.232 [2024-12-08 06:31:42.344316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.232 [2024-12-08 06:31:42.344633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.232 [2024-12-08 06:31:42.344663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.491 [2024-12-08 06:31:42.349936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.491 [2024-12-08 06:31:42.350189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.491 [2024-12-08 06:31:42.350218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.491 [2024-12-08 06:31:42.355572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.491 [2024-12-08 06:31:42.355931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.491 [2024-12-08 06:31:42.355962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.491 [2024-12-08 06:31:42.361076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.491 [2024-12-08 06:31:42.361371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.491 [2024-12-08 06:31:42.361399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.491 [2024-12-08 06:31:42.366630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.491 [2024-12-08 06:31:42.366974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.491 [2024-12-08 06:31:42.367004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.491 [2024-12-08 06:31:42.372389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8 00:27:52.491 [2024-12-08 06:31:42.372760] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.491 [2024-12-08 06:31:42.372804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:52.491 [2024-12-08 06:31:42.378453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8
00:27:52.491 [2024-12-08 06:31:42.378785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.491 [2024-12-08 06:31:42.378814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-line sequence (data_crc32_calc_done data digest error on tqpair 0x1c37e60, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens more WRITE commands at varying LBAs through 06:31:42.814 ...]
00:27:52.753 4952.00 IOPS, 619.00 MiB/s [2024-12-08T05:31:42.872Z]
00:27:52.753 [2024-12-08 06:31:42.821317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c37e60) with pdu=0x200016eff3c8
00:27:52.753 [2024-12-08 06:31:42.821405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.753 [2024-12-08 06:31:42.821430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:52.753
00:27:52.753 Latency(us)
[2024-12-08T05:31:42.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:52.753 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:52.753 nvme0n1 : 2.00 4949.29 618.66 0.00 0.00 3224.74 2305.90 12427.57
00:27:52.753 [2024-12-08T05:31:42.872Z] ===================================================================================================================
00:27:52.753 [2024-12-08T05:31:42.872Z] Total : 4949.29 618.66 0.00 0.00 3224.74 2305.90 12427.57
00:27:52.753 {
00:27:52.753   "results": [
00:27:52.753     {
00:27:52.753       "job": "nvme0n1",
00:27:52.753       "core_mask": "0x2",
00:27:52.753       "workload": "randwrite",
00:27:52.753       "status": "finished",
00:27:52.753       "queue_depth": 16,
00:27:52.753       "io_size": 131072,
00:27:52.753       "runtime": 2.004935,
00:27:52.753       "iops": 4949.28763276615,
00:27:52.753       "mibps": 618.6609540957687,
00:27:52.753       "io_failed": 0,
00:27:52.753       "io_timeout": 0,
00:27:52.753       "avg_latency_us": 3224.736719853987,
00:27:52.753       "min_latency_us": 2305.8962962962964,
00:27:52.753       "max_latency_us": 12427.567407407407
00:27:52.753     }
00:27:52.753   ],
00:27:52.753   "core_count": 1
00:27:52.753 }
00:27:52.753 06:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
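The JSON block above is bdevperf's machine-readable summary of the digest-error run. A minimal sketch of pulling the headline numbers back out of a saved copy with jq (the results.json filename is an assumption; the key names are taken from the block itself):

    # Print job name, throughput and average latency from a captured
    # bdevperf summary. Assumes the JSON above was saved to results.json.
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json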
00:27:52.753 06:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:52.753 06:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:27:52.753 06:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:53.011 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 321 > 0 ))
00:27:53.011 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1173461
00:27:53.011 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1173461 ']'
00:27:53.011 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1173461
00:27:53.011 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:53.011 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:53.011 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1173461
00:27:53.268 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:53.268 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:53.268 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1173461'
00:27:53.268 killing process with pid 1173461
00:27:53.268 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1173461
00:27:53.268 Received shutdown signal, test time was about 2.000000 seconds
00:27:53.268
00:27:53.268 Latency(us)
[2024-12-08T05:31:43.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-08T05:31:43.387Z] ===================================================================================================================
[2024-12-08T05:31:43.387Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:53.268 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1173461
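The (( 321 > 0 )) check above is the digest-error test's pass condition: the run only counts if at least one command completed with a transient transport error. A sketch of the extraction it traces (host/digest.sh lines 18-28), assuming the bperf RPC socket is still listening:

    # Ask the bdev layer for per-bdev NVMe error statistics and pull out the
    # count of COMMAND TRANSIENT TRANSPORT ERROR completions.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) && echo "injected digest errors were detected: $errcount"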
00:27:53.268 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1172095
00:27:53.268 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1172095 ']'
00:27:53.268 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1172095
00:27:53.268 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:53.268 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:53.268 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1172095
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1172095'
00:27:53.527 killing process with pid 1172095
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1172095
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1172095
00:27:53.527
00:27:53.527 real 0m15.414s
00:27:53.527 user 0m30.103s
00:27:53.527 sys 0m5.200s
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:53.527 ************************************
00:27:53.527 END TEST nvmf_digest_error
00:27:53.527 ************************************
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:53.527 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:53.787 rmmod nvme_tcp
00:27:53.787 rmmod nvme_fabrics
00:27:53.787 rmmod nvme_keyring
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1172095 ']'
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1172095
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1172095 ']'
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1172095
00:27:53.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1172095) - No such process
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1172095 is not found'
00:27:53.787 Process with pid 1172095 is not found
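killprocess runs twice for pid 1172095 above: once while the nvmf target is still alive, and once from nvmftestfini after it has already exited, which is why the second kill -0 probe fails with "No such process" and the helper just reports the pid as gone instead of erroring out. A reduced sketch of that guard (the real helper also records the process name and special-cases sudo; wait assumes the pid is a child of the calling shell, as it is in the harness):

    # Terminate a pid only if it still exists; kill -0 performs the
    # existence/permission check without delivering any signal.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        if kill -0 "$pid" 2>/dev/null; then
            echo "killing process with pid $pid"
            kill "$pid" && wait "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }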
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:53.787 06:31:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:55.689 06:31:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:55.689
00:27:55.689 real 0m36.095s
00:27:55.689 user 1m2.427s
00:27:55.689 sys 0m12.087s
00:27:55.689 06:31:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:55.689 06:31:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:55.689 ************************************
00:27:55.689 END TEST nvmf_digest
00:27:55.689 ************************************
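nvmf_tcp_fini above unwinds the firewall state by replaying the saved ruleset minus everything the test tagged, then drops the test namespace and flushes the address left on the second test port. The iptables idiom on its own, exactly as traced:

    # Remove only the SPDK_NVMF-tagged rules: dump the current rules,
    # filter out the tagged ones, and load the result back in one shot.
    iptables-save | grep -v SPDK_NVMF | iptables-restore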
00:27:55.689 06:31:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:27:55.689 06:31:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:27:55.689 06:31:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:27:55.689 06:31:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:27:55.689 06:31:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:55.689 06:31:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:55.689 06:31:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:55.948 ************************************
00:27:55.948 START TEST nvmf_bdevperf
00:27:55.948 ************************************
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:27:55.948 * Looking for test storage...
00:27:55.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:55.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:55.948 --rc genhtml_branch_coverage=1
00:27:55.948 --rc genhtml_function_coverage=1
00:27:55.948 --rc genhtml_legend=1
00:27:55.948 --rc geninfo_all_blocks=1
00:27:55.948 --rc geninfo_unexecuted_blocks=1
00:27:55.948
00:27:55.948 '
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:55.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:55.948 --rc genhtml_branch_coverage=1
00:27:55.948 --rc genhtml_function_coverage=1
00:27:55.948 --rc genhtml_legend=1
00:27:55.948 --rc geninfo_all_blocks=1
00:27:55.948 --rc geninfo_unexecuted_blocks=1
00:27:55.948
00:27:55.948 '
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:27:55.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:55.948 --rc genhtml_branch_coverage=1
00:27:55.948 --rc genhtml_function_coverage=1
00:27:55.948 --rc genhtml_legend=1
00:27:55.948 --rc geninfo_all_blocks=1
00:27:55.948 --rc geninfo_unexecuted_blocks=1
00:27:55.948
00:27:55.948 '
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:27:55.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:55.948 --rc genhtml_branch_coverage=1
00:27:55.948 --rc genhtml_function_coverage=1
00:27:55.948 --rc genhtml_legend=1
00:27:55.948 --rc geninfo_all_blocks=1
00:27:55.948 --rc geninfo_unexecuted_blocks=1
00:27:55.948
00:27:55.948 '
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
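The cmp_versions trace above is scripts/common.sh deciding that lcov 1.15 predates 2.x, which is what switches on the extra branch/function coverage flags: both version strings are split on '.', '-' and ':' into arrays and compared numerically field by field. A condensed sketch of that comparison (only the '<' case; the real helper also handles >, =, >= and <=):

    # Succeed when version $1 sorts strictly before version $2,
    # comparing numeric fields left to right (missing fields count as 0).
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"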
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:55.948 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
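The host identity above is generated once per run: nvme gen-hostnqn emits an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and the bare UUID is reused as the host ID. A sketch of that derivation (the parameter expansion is an assumption; the log only shows the resulting values):

    # Generate a host NQN and recover the UUID part for --hostid.
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # strip through the last ':' leaving the UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")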
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:55.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable
00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
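The "integer expression expected" complaint above is a benign quirk in build_nvmf_app_args: a test flag that was never exported reaches a numeric comparison as an empty string, test(1) rejects '[' '' -eq 1 ']' as a usage error, and the branch falls through as if the flag were 0. A defensive sketch (the FLAG name is illustrative, not the variable nvmf/common.sh actually tests):

    # [ "" -eq 1 ] is a usage error in test(1); defaulting the variable
    # keeps the comparison well-formed when the flag is unset.
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi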
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:55.949 06:31:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.484 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:58.485 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:58.485 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:58.485 Found net devices under 0000:84:00.0: cvl_0_0 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:58.485 Found net devices under 0000:84:00.1: cvl_0_1 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:58.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:27:58.485 00:27:58.485 --- 10.0.0.2 ping statistics --- 00:27:58.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.485 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:27:58.485 00:27:58.485 --- 10.0.0.1 ping statistics --- 00:27:58.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.485 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1175960 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1175960 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1175960 ']' 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.485 [2024-12-08 06:31:48.344277] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
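The nvmf_tcp_init block traced above is worth reading as a recipe: with NET_TYPE=phy the harness splits one host into target and initiator by moving the target-side port into a private network namespace. A minimal standalone sketch of the same topology follows; the interface names cvl_0_0/cvl_0_1 are specific to this node, and the harness's ipts wrapper additionally tags the iptables rule with a comment, omitted here.

```bash
# Sketch of the nvmf_tcp_init topology set up above (substitute your own NICs).
set -e
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0            # start from clean addressing
ip -4 addr flush cvl_0_1

ip netns add "$NS"                  # target side gets its own namespace
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator, default netns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target, inside the netns

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP port on the initiator-side interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
```

The two pings mirror the connectivity check traced above; from here on, everything host-side reaches the target at 10.0.0.2 through the namespace.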
00:27:58.485 [2024-12-08 06:31:48.344353] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.485 [2024-12-08 06:31:48.417896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:58.485 [2024-12-08 06:31:48.473936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.485 [2024-12-08 06:31:48.474000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.485 [2024-12-08 06:31:48.474028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.485 [2024-12-08 06:31:48.474040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.485 [2024-12-08 06:31:48.474049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.485 [2024-12-08 06:31:48.475654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.485 [2024-12-08 06:31:48.475755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.485 [2024-12-08 06:31:48.475760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:58.485 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.747 [2024-12-08 06:31:48.612346] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.747 Malloc0 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
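Between waitforlisten returning and the listener notice just below, host/bdevperf.sh provisions the target entirely through rpc_cmd, a thin wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock. Collected into one standalone sketch (including the add_ns/add_listener calls traced immediately after this point; the flag comments reflect how the harness uses them, not a full option reference):

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192    # $NVMF_TRANSPORT_OPTS from above; -u sets the I/O unit size
$RPC bdev_malloc_create 64 512 -b Malloc0       # MALLOC_BDEV_SIZE MiB bdev with MALLOC_BLOCK_SIZE-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Since rpc.py talks over a filesystem Unix socket, no netns juggling is needed even though the target process itself runs inside cvl_0_0_ns_spdk.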
00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.747 [2024-12-08 06:31:48.676349] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:58.747 { 00:27:58.747 "params": { 00:27:58.747 "name": "Nvme$subsystem", 00:27:58.747 "trtype": "$TEST_TRANSPORT", 00:27:58.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.747 "adrfam": "ipv4", 00:27:58.747 "trsvcid": "$NVMF_PORT", 00:27:58.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.747 "hdgst": ${hdgst:-false}, 00:27:58.747 "ddgst": ${ddgst:-false} 00:27:58.747 }, 00:27:58.747 "method": "bdev_nvme_attach_controller" 00:27:58.747 } 00:27:58.747 EOF 00:27:58.747 )") 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:58.747 06:31:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:58.747 "params": { 00:27:58.747 "name": "Nvme1", 00:27:58.747 "trtype": "tcp", 00:27:58.747 "traddr": "10.0.0.2", 00:27:58.747 "adrfam": "ipv4", 00:27:58.747 "trsvcid": "4420", 00:27:58.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:58.747 "hdgst": false, 00:27:58.747 "ddgst": false 00:27:58.747 }, 00:27:58.747 "method": "bdev_nvme_attach_controller" 00:27:58.747 }' 00:27:58.747 [2024-12-08 06:31:48.728522] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
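bdevperf gets its NVMe-oF connection from JSON fed over /dev/fd/62, generated by gen_nvmf_target_json as traced above. Below is a standalone approximation using a regular file: the bdevperf.json name is hypothetical, and the subsystems envelope is an assumption about gen_nvmf_target_json's full output, since the trace only prints the bdev_nvme_attach_controller object itself.

```bash
# Hypothetical standalone equivalent of the traced run: the same
# attach-controller config as printed above, wrapped in the envelope
# that bdevperf's --json loader expects.
cat > bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json bdevperf.json -q 128 -o 4096 -w verify -t 1
```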
00:27:58.747 [2024-12-08 06:31:48.728590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175987 ] 00:27:58.747 [2024-12-08 06:31:48.796753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.747 [2024-12-08 06:31:48.857927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.317 Running I/O for 1 seconds... 00:28:00.270 8612.00 IOPS, 33.64 MiB/s 00:28:00.270 Latency(us) 00:28:00.270 [2024-12-08T05:31:50.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.270 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:00.270 Verification LBA range: start 0x0 length 0x4000 00:28:00.270 Nvme1n1 : 1.01 8658.47 33.82 0.00 0.00 14717.11 3034.07 16699.54 00:28:00.270 [2024-12-08T05:31:50.389Z] =================================================================================================================== 00:28:00.270 [2024-12-08T05:31:50.389Z] Total : 8658.47 33.82 0.00 0.00 14717.11 3034.07 16699.54 00:28:00.528 06:31:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1176176 00:28:00.528 06:31:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:00.528 06:31:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:00.528 06:31:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:00.528 06:31:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:00.528 06:31:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:00.528 06:31:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:00.528 06:31:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:00.528 { 00:28:00.528 "params": { 00:28:00.528 "name": "Nvme$subsystem", 00:28:00.528 "trtype": "$TEST_TRANSPORT", 00:28:00.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.528 "adrfam": "ipv4", 00:28:00.528 "trsvcid": "$NVMF_PORT", 00:28:00.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.528 "hdgst": ${hdgst:-false}, 00:28:00.528 "ddgst": ${ddgst:-false} 00:28:00.528 }, 00:28:00.528 "method": "bdev_nvme_attach_controller" 00:28:00.528 } 00:28:00.528 EOF 00:28:00.528 )") 00:28:00.528 06:31:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:00.528 06:31:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:28:00.528 06:31:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:00.528 06:31:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:00.528 "params": { 00:28:00.528 "name": "Nvme1", 00:28:00.528 "trtype": "tcp", 00:28:00.528 "traddr": "10.0.0.2", 00:28:00.528 "adrfam": "ipv4", 00:28:00.528 "trsvcid": "4420", 00:28:00.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:00.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:00.528 "hdgst": false, 00:28:00.528 "ddgst": false 00:28:00.528 }, 00:28:00.528 "method": "bdev_nvme_attach_controller" 00:28:00.528 }' 00:28:00.528 [2024-12-08 06:31:50.488019] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:28:00.528 [2024-12-08 06:31:50.488130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1176176 ] 00:28:00.528 [2024-12-08 06:31:50.560417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.528 [2024-12-08 06:31:50.619118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.786 Running I/O for 15 seconds... 00:28:02.748 8581.00 IOPS, 33.52 MiB/s [2024-12-08T05:31:53.815Z] 8643.00 IOPS, 33.76 MiB/s [2024-12-08T05:31:53.815Z] 06:31:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1175960 00:28:03.696 06:31:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:03.696 [2024-12-08 06:31:53.455160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.696 [2024-12-08 06:31:53.455211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.696 [2024-12-08 06:31:53.455255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.696 [2024-12-08 06:31:53.455273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.696 [2024-12-08 06:31:53.455289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.696 [2024-12-08 06:31:53.455303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.696 [2024-12-08 06:31:53.455318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.696 [2024-12-08 06:31:53.455331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.696 [2024-12-08 06:31:53.455346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.696 [2024-12-08 06:31:53.455361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.696 [2024-12-08 06:31:53.455378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.696 [2024-12-08 
06:31:53.455392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.696
[... dozens of further, near-identical nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs elided: every remaining outstanding WRITE (lba 47776-48440, len:8) and READ (lba 47544-47592, len:8) on qid:1 completes as ABORTED - SQ DELETION (00/08) after nvmf_tgt pid 1175960 is killed ...]
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 
06:31:53.458645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.699 [2024-12-08 06:31:53.458918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.699 [2024-12-08 06:31:53.458953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.458968] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba4550 is same with the state(6) to be set 00:28:03.699 [2024-12-08 06:31:53.458986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:03.699 [2024-12-08 06:31:53.458997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:03.699 [2024-12-08 06:31:53.459008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47720 len:8 PRP1 0x0 PRP2 0x0 00:28:03.699 [2024-12-08 06:31:53.459036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.459162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.699 [2024-12-08 06:31:53.459183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.459196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.699 [2024-12-08 06:31:53.459207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.699 [2024-12-08 06:31:53.459219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.699 [2024-12-08 06:31:53.459235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.700 [2024-12-08 06:31:53.459247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.700 [2024-12-08 06:31:53.459258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.700 [2024-12-08 06:31:53.459268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.700 [2024-12-08 06:31:53.462592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.700 [2024-12-08 06:31:53.462629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.700 [2024-12-08 06:31:53.463377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.700 [2024-12-08 06:31:53.463404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.700 [2024-12-08 06:31:53.463419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.700 [2024-12-08 06:31:53.463612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.700 [2024-12-08 06:31:53.463848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.700 [2024-12-08 06:31:53.463874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.700 [2024-12-08 06:31:53.463891] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.700 [2024-12-08 06:31:53.463907] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.700 [2024-12-08 06:31:53.476175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.700 [2024-12-08 06:31:53.476618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.700 [2024-12-08 06:31:53.476667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.700 [2024-12-08 06:31:53.476682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.700 [2024-12-08 06:31:53.476921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.700 [2024-12-08 06:31:53.477140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.700 [2024-12-08 06:31:53.477159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.700 [2024-12-08 06:31:53.477171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.700 [2024-12-08 06:31:53.477182] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.700 [2024-12-08 06:31:53.489335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.700 [2024-12-08 06:31:53.489788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.700 [2024-12-08 06:31:53.489813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.700 [2024-12-08 06:31:53.489842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.700 [2024-12-08 06:31:53.490032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.700 [2024-12-08 06:31:53.490227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.700 [2024-12-08 06:31:53.490245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.700 [2024-12-08 06:31:53.490257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.700 [2024-12-08 06:31:53.490267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.700 [2024-12-08 06:31:53.502537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.700 [2024-12-08 06:31:53.502995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.700 [2024-12-08 06:31:53.503034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.700 [2024-12-08 06:31:53.503048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.700 [2024-12-08 06:31:53.503245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.700 [2024-12-08 06:31:53.503439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.700 [2024-12-08 06:31:53.503458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.700 [2024-12-08 06:31:53.503470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.700 [2024-12-08 06:31:53.503481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.700 [2024-12-08 06:31:53.515794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.700 [2024-12-08 06:31:53.516182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.700 [2024-12-08 06:31:53.516221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.700 [2024-12-08 06:31:53.516235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.700 [2024-12-08 06:31:53.516439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.700 [2024-12-08 06:31:53.516634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.700 [2024-12-08 06:31:53.516653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.700 [2024-12-08 06:31:53.516665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.700 [2024-12-08 06:31:53.516676] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.700 [2024-12-08 06:31:53.529233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.700 [2024-12-08 06:31:53.529679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.700 [2024-12-08 06:31:53.529719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.700 [2024-12-08 06:31:53.529746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.700 [2024-12-08 06:31:53.529970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.700 [2024-12-08 06:31:53.530191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.700 [2024-12-08 06:31:53.530211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.700 [2024-12-08 06:31:53.530223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.700 [2024-12-08 06:31:53.530235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.700 [2024-12-08 06:31:53.542606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.700 [2024-12-08 06:31:53.543118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.700 [2024-12-08 06:31:53.543143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.700 [2024-12-08 06:31:53.543172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.700 [2024-12-08 06:31:53.543369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.700 [2024-12-08 06:31:53.543569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.700 [2024-12-08 06:31:53.543593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.700 [2024-12-08 06:31:53.543605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.700 [2024-12-08 06:31:53.543617] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.700 [2024-12-08 06:31:53.556047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.700 [2024-12-08 06:31:53.556453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.700 [2024-12-08 06:31:53.556501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.700 [2024-12-08 06:31:53.556515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.700 [2024-12-08 06:31:53.556752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.700 [2024-12-08 06:31:53.556959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.700 [2024-12-08 06:31:53.556979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.700 [2024-12-08 06:31:53.556991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.700 [2024-12-08 06:31:53.557003] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.700 [2024-12-08 06:31:53.569441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.700 [2024-12-08 06:31:53.569889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.700 [2024-12-08 06:31:53.569931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.700 [2024-12-08 06:31:53.569947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.700 [2024-12-08 06:31:53.570159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.700 [2024-12-08 06:31:53.570360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.700 [2024-12-08 06:31:53.570379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.700 [2024-12-08 06:31:53.570391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.700 [2024-12-08 06:31:53.570403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.700 [2024-12-08 06:31:53.582748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.700 [2024-12-08 06:31:53.583202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.700 [2024-12-08 06:31:53.583240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.701 [2024-12-08 06:31:53.583255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.701 [2024-12-08 06:31:53.583452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.701 [2024-12-08 06:31:53.583652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.701 [2024-12-08 06:31:53.583670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.701 [2024-12-08 06:31:53.583683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.701 [2024-12-08 06:31:53.583699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.701 [2024-12-08 06:31:53.596105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.701 [2024-12-08 06:31:53.596520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.701 [2024-12-08 06:31:53.596546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.701 [2024-12-08 06:31:53.596574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.701 [2024-12-08 06:31:53.596816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.701 [2024-12-08 06:31:53.597045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.701 [2024-12-08 06:31:53.597065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.701 [2024-12-08 06:31:53.597092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.701 [2024-12-08 06:31:53.597104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.701 [2024-12-08 06:31:53.609430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.701 [2024-12-08 06:31:53.609845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.701 [2024-12-08 06:31:53.609872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.701 [2024-12-08 06:31:53.609886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.701 [2024-12-08 06:31:53.610116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.701 [2024-12-08 06:31:53.610316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.701 [2024-12-08 06:31:53.610336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.701 [2024-12-08 06:31:53.610348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.701 [2024-12-08 06:31:53.610359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.701 [2024-12-08 06:31:53.622741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.701 [2024-12-08 06:31:53.623184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.701 [2024-12-08 06:31:53.623224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.701 [2024-12-08 06:31:53.623239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.701 [2024-12-08 06:31:53.623435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.701 [2024-12-08 06:31:53.623636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.701 [2024-12-08 06:31:53.623655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.701 [2024-12-08 06:31:53.623667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.701 [2024-12-08 06:31:53.623678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.701 [2024-12-08 06:31:53.636103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.701 [2024-12-08 06:31:53.636542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.701 [2024-12-08 06:31:53.636567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.701 [2024-12-08 06:31:53.636596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.701 [2024-12-08 06:31:53.636838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.701 [2024-12-08 06:31:53.637066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.701 [2024-12-08 06:31:53.637100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.701 [2024-12-08 06:31:53.637112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.701 [2024-12-08 06:31:53.637123] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.701 [2024-12-08 06:31:53.649447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.701 [2024-12-08 06:31:53.649901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.701 [2024-12-08 06:31:53.649941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.701 [2024-12-08 06:31:53.649956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.701 [2024-12-08 06:31:53.650169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.701 [2024-12-08 06:31:53.650370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.701 [2024-12-08 06:31:53.650389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.701 [2024-12-08 06:31:53.650401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.701 [2024-12-08 06:31:53.650412] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.701 [2024-12-08 06:31:53.662831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.701 [2024-12-08 06:31:53.663292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.701 [2024-12-08 06:31:53.663317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.701 [2024-12-08 06:31:53.663346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.701 [2024-12-08 06:31:53.663542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.701 [2024-12-08 06:31:53.663771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.701 [2024-12-08 06:31:53.663791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.701 [2024-12-08 06:31:53.663804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.701 [2024-12-08 06:31:53.663816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.701 [2024-12-08 06:31:53.676186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.701 [2024-12-08 06:31:53.676612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.701 [2024-12-08 06:31:53.676637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.701 [2024-12-08 06:31:53.676666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.701 [2024-12-08 06:31:53.676917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.701 [2024-12-08 06:31:53.677155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.701 [2024-12-08 06:31:53.677174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.701 [2024-12-08 06:31:53.677186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.701 [2024-12-08 06:31:53.677198] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.701 [2024-12-08 06:31:53.689524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.701 [2024-12-08 06:31:53.689939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.701 [2024-12-08 06:31:53.689965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.701 [2024-12-08 06:31:53.689996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.701 [2024-12-08 06:31:53.690209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.701 [2024-12-08 06:31:53.690410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.702 [2024-12-08 06:31:53.690428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.702 [2024-12-08 06:31:53.690440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.702 [2024-12-08 06:31:53.690451] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.702 [2024-12-08 06:31:53.702869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.702 [2024-12-08 06:31:53.703333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.702 [2024-12-08 06:31:53.703358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.702 [2024-12-08 06:31:53.703386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.702 [2024-12-08 06:31:53.703582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.702 [2024-12-08 06:31:53.703828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.702 [2024-12-08 06:31:53.703850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.702 [2024-12-08 06:31:53.703863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.702 [2024-12-08 06:31:53.703875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.702 [2024-12-08 06:31:53.716184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.702 [2024-12-08 06:31:53.716627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.702 [2024-12-08 06:31:53.716669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.702 [2024-12-08 06:31:53.716683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.702 [2024-12-08 06:31:53.716922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.702 [2024-12-08 06:31:53.717152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.702 [2024-12-08 06:31:53.717181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.702 [2024-12-08 06:31:53.717195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.702 [2024-12-08 06:31:53.717208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.702 [2024-12-08 06:31:53.729971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.702 [2024-12-08 06:31:53.730451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.702 [2024-12-08 06:31:53.730492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.702 [2024-12-08 06:31:53.730508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.702 [2024-12-08 06:31:53.730735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.702 [2024-12-08 06:31:53.730956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.702 [2024-12-08 06:31:53.730977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.702 [2024-12-08 06:31:53.730990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.702 [2024-12-08 06:31:53.731003] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.702 [2024-12-08 06:31:53.743430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.702 [2024-12-08 06:31:53.743876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.702 [2024-12-08 06:31:53.743904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.702 [2024-12-08 06:31:53.743934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.702 [2024-12-08 06:31:53.744150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.702 [2024-12-08 06:31:53.744351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.702 [2024-12-08 06:31:53.744370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.702 [2024-12-08 06:31:53.744382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.702 [2024-12-08 06:31:53.744393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.702 [2024-12-08 06:31:53.756743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.702 [2024-12-08 06:31:53.757166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.702 [2024-12-08 06:31:53.757191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.702 [2024-12-08 06:31:53.757221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.702 [2024-12-08 06:31:53.757417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.702 [2024-12-08 06:31:53.757617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.702 [2024-12-08 06:31:53.757636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.702 [2024-12-08 06:31:53.757649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.702 [2024-12-08 06:31:53.757665] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.702 [2024-12-08 06:31:53.770082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.702 [2024-12-08 06:31:53.770520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.702 [2024-12-08 06:31:53.770561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.702 [2024-12-08 06:31:53.770575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.702 [2024-12-08 06:31:53.770817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.702 [2024-12-08 06:31:53.771032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.702 [2024-12-08 06:31:53.771069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.702 [2024-12-08 06:31:53.771082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.702 [2024-12-08 06:31:53.771094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.702 [2024-12-08 06:31:53.783422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.702 [2024-12-08 06:31:53.783830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.702 [2024-12-08 06:31:53.783855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.702 [2024-12-08 06:31:53.783869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.702 [2024-12-08 06:31:53.784099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.702 [2024-12-08 06:31:53.784301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.702 [2024-12-08 06:31:53.784319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.702 [2024-12-08 06:31:53.784332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.702 [2024-12-08 06:31:53.784343] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.702 [2024-12-08 06:31:53.796676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.702 [2024-12-08 06:31:53.797129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.702 [2024-12-08 06:31:53.797168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.702 [2024-12-08 06:31:53.797184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.702 [2024-12-08 06:31:53.797380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.702 [2024-12-08 06:31:53.797580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.702 [2024-12-08 06:31:53.797599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.702 [2024-12-08 06:31:53.797611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.702 [2024-12-08 06:31:53.797622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.963 [2024-12-08 06:31:53.809975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.963 [2024-12-08 06:31:53.810424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.963 [2024-12-08 06:31:53.810464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.963 [2024-12-08 06:31:53.810479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.963 [2024-12-08 06:31:53.810682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.963 [2024-12-08 06:31:53.810924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.963 [2024-12-08 06:31:53.810946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.963 [2024-12-08 06:31:53.810959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.963 [2024-12-08 06:31:53.810971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.963 [2024-12-08 06:31:53.823256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.963 [2024-12-08 06:31:53.823652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.963 [2024-12-08 06:31:53.823693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.963 [2024-12-08 06:31:53.823707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.963 [2024-12-08 06:31:53.823931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.963 [2024-12-08 06:31:53.824155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.963 [2024-12-08 06:31:53.824174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.963 [2024-12-08 06:31:53.824186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.963 [2024-12-08 06:31:53.824198] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.963 [2024-12-08 06:31:53.836644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.963 [2024-12-08 06:31:53.837012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.963 [2024-12-08 06:31:53.837038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.963 [2024-12-08 06:31:53.837067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.963 [2024-12-08 06:31:53.837263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.963 [2024-12-08 06:31:53.837463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.963 [2024-12-08 06:31:53.837483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.963 [2024-12-08 06:31:53.837495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.963 [2024-12-08 06:31:53.837506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.963 [2024-12-08 06:31:53.849918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.963 [2024-12-08 06:31:53.850325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.963 [2024-12-08 06:31:53.850364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.963 [2024-12-08 06:31:53.850378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.963 [2024-12-08 06:31:53.850596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.963 [2024-12-08 06:31:53.850829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.963 [2024-12-08 06:31:53.850851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.963 [2024-12-08 06:31:53.850865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.963 [2024-12-08 06:31:53.850877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:03.963 [2024-12-08 06:31:53.863370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:03.963 [2024-12-08 06:31:53.863775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.963 [2024-12-08 06:31:53.863817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:03.963 [2024-12-08 06:31:53.863833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:03.963 [2024-12-08 06:31:53.864071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:03.963 [2024-12-08 06:31:53.864279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:03.963 [2024-12-08 06:31:53.864299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:03.963 [2024-12-08 06:31:53.864311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:03.963 [2024-12-08 06:31:53.864323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:03.963 7442.33 IOPS, 29.07 MiB/s [2024-12-08T05:31:54.082Z] [2024-12-08 06:31:53.876696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.963 [2024-12-08 06:31:53.877169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.963 [2024-12-08 06:31:53.877195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.963 [2024-12-08 06:31:53.877224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.963 [2024-12-08 06:31:53.877421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.963 [2024-12-08 06:31:53.877621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.963 [2024-12-08 06:31:53.877640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.963 [2024-12-08 06:31:53.877652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.963 [2024-12-08 06:31:53.877663] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.963 [2024-12-08 06:31:53.890103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.963 [2024-12-08 06:31:53.890520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.963 [2024-12-08 06:31:53.890552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.963 [2024-12-08 06:31:53.890580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.963 [2024-12-08 06:31:53.890821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.963 [2024-12-08 06:31:53.891049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.963 [2024-12-08 06:31:53.891074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.963 [2024-12-08 06:31:53.891102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.963 [2024-12-08 06:31:53.891114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.963 [2024-12-08 06:31:53.903371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.964 [2024-12-08 06:31:53.903757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.964 [2024-12-08 06:31:53.903784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.964 [2024-12-08 06:31:53.903799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.964 [2024-12-08 06:31:53.904001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.964 [2024-12-08 06:31:53.904219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.964 [2024-12-08 06:31:53.904238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.964 [2024-12-08 06:31:53.904250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.964 [2024-12-08 06:31:53.904262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.964 [2024-12-08 06:31:53.916694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.964 [2024-12-08 06:31:53.917116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.964 [2024-12-08 06:31:53.917156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.964 [2024-12-08 06:31:53.917170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.964 [2024-12-08 06:31:53.917380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.964 [2024-12-08 06:31:53.917581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.964 [2024-12-08 06:31:53.917600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.964 [2024-12-08 06:31:53.917612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.964 [2024-12-08 06:31:53.917623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.964 [2024-12-08 06:31:53.930112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.964 [2024-12-08 06:31:53.930452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.964 [2024-12-08 06:31:53.930479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.964 [2024-12-08 06:31:53.930493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.964 [2024-12-08 06:31:53.930689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.964 [2024-12-08 06:31:53.930939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.964 [2024-12-08 06:31:53.930971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.964 [2024-12-08 06:31:53.930985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.964 [2024-12-08 06:31:53.931012] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.964 [2024-12-08 06:31:53.943457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.964 [2024-12-08 06:31:53.943802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.964 [2024-12-08 06:31:53.943830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.964 [2024-12-08 06:31:53.943845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.964 [2024-12-08 06:31:53.944064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.964 [2024-12-08 06:31:53.944264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.964 [2024-12-08 06:31:53.944283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.964 [2024-12-08 06:31:53.944295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.964 [2024-12-08 06:31:53.944306] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.964 [2024-12-08 06:31:53.956755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.964 [2024-12-08 06:31:53.957206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.964 [2024-12-08 06:31:53.957245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.964 [2024-12-08 06:31:53.957260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.964 [2024-12-08 06:31:53.957457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.964 [2024-12-08 06:31:53.957658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.964 [2024-12-08 06:31:53.957677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.964 [2024-12-08 06:31:53.957689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.964 [2024-12-08 06:31:53.957700] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.964 [2024-12-08 06:31:53.970144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.964 [2024-12-08 06:31:53.970525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.964 [2024-12-08 06:31:53.970560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.964 [2024-12-08 06:31:53.970576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.964 [2024-12-08 06:31:53.970814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.964 [2024-12-08 06:31:53.971044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.964 [2024-12-08 06:31:53.971067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.964 [2024-12-08 06:31:53.971080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.964 [2024-12-08 06:31:53.971093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.964 [2024-12-08 06:31:53.983482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.964 [2024-12-08 06:31:53.983833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.964 [2024-12-08 06:31:53.983861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.964 [2024-12-08 06:31:53.983877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.964 [2024-12-08 06:31:53.984095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.964 [2024-12-08 06:31:53.984296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.964 [2024-12-08 06:31:53.984316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.964 [2024-12-08 06:31:53.984328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.964 [2024-12-08 06:31:53.984340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.964 [2024-12-08 06:31:53.997016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.964 [2024-12-08 06:31:53.997371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.964 [2024-12-08 06:31:53.997410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.964 [2024-12-08 06:31:53.997425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.964 [2024-12-08 06:31:53.997636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.964 [2024-12-08 06:31:53.997874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.964 [2024-12-08 06:31:53.997896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.964 [2024-12-08 06:31:53.997909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.964 [2024-12-08 06:31:53.997921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.965 [2024-12-08 06:31:54.010342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.965 [2024-12-08 06:31:54.010671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.965 [2024-12-08 06:31:54.010697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.965 [2024-12-08 06:31:54.010735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.965 [2024-12-08 06:31:54.010946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.965 [2024-12-08 06:31:54.011182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.965 [2024-12-08 06:31:54.011201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.965 [2024-12-08 06:31:54.011213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.965 [2024-12-08 06:31:54.011225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.965 [2024-12-08 06:31:54.023717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.965 [2024-12-08 06:31:54.024159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.965 [2024-12-08 06:31:54.024185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.965 [2024-12-08 06:31:54.024213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.965 [2024-12-08 06:31:54.024421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.965 [2024-12-08 06:31:54.024627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.965 [2024-12-08 06:31:54.024647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.965 [2024-12-08 06:31:54.024660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.965 [2024-12-08 06:31:54.024672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.965 [2024-12-08 06:31:54.037226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.965 [2024-12-08 06:31:54.037698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.965 [2024-12-08 06:31:54.037747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.965 [2024-12-08 06:31:54.037763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.965 [2024-12-08 06:31:54.037987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.965 [2024-12-08 06:31:54.038210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.965 [2024-12-08 06:31:54.038230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.965 [2024-12-08 06:31:54.038243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.965 [2024-12-08 06:31:54.038255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.965 [2024-12-08 06:31:54.050555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.965 [2024-12-08 06:31:54.050936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.965 [2024-12-08 06:31:54.050980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.965 [2024-12-08 06:31:54.050994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.965 [2024-12-08 06:31:54.051208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.965 [2024-12-08 06:31:54.051408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.965 [2024-12-08 06:31:54.051427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.965 [2024-12-08 06:31:54.051440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.965 [2024-12-08 06:31:54.051452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.965 [2024-12-08 06:31:54.063876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.965 [2024-12-08 06:31:54.064255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.965 [2024-12-08 06:31:54.064295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.965 [2024-12-08 06:31:54.064309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.965 [2024-12-08 06:31:54.064519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.965 [2024-12-08 06:31:54.064747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.965 [2024-12-08 06:31:54.064772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.965 [2024-12-08 06:31:54.064786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.965 [2024-12-08 06:31:54.064797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:03.965 [2024-12-08 06:31:54.077215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:03.965 [2024-12-08 06:31:54.077557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.965 [2024-12-08 06:31:54.077583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:03.965 [2024-12-08 06:31:54.077612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:03.965 [2024-12-08 06:31:54.077850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:03.965 [2024-12-08 06:31:54.078058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:03.965 [2024-12-08 06:31:54.078077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:03.965 [2024-12-08 06:31:54.078091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:03.965 [2024-12-08 06:31:54.078102] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.225 [2024-12-08 06:31:54.090606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.225 [2024-12-08 06:31:54.091102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.225 [2024-12-08 06:31:54.091142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.225 [2024-12-08 06:31:54.091160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.225 [2024-12-08 06:31:54.091363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.225 [2024-12-08 06:31:54.091569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.225 [2024-12-08 06:31:54.091589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.225 [2024-12-08 06:31:54.091601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.225 [2024-12-08 06:31:54.091613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.225 [2024-12-08 06:31:54.104142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.225 [2024-12-08 06:31:54.104561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.225 [2024-12-08 06:31:54.104590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.225 [2024-12-08 06:31:54.104618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.225 [2024-12-08 06:31:54.104870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.225 [2024-12-08 06:31:54.105118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.225 [2024-12-08 06:31:54.105138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.225 [2024-12-08 06:31:54.105151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.225 [2024-12-08 06:31:54.105168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.225 [2024-12-08 06:31:54.117486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.225 [2024-12-08 06:31:54.117860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.226 [2024-12-08 06:31:54.117902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.226 [2024-12-08 06:31:54.117917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.226 [2024-12-08 06:31:54.118162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.226 [2024-12-08 06:31:54.118367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.226 [2024-12-08 06:31:54.118385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.226 [2024-12-08 06:31:54.118397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.226 [2024-12-08 06:31:54.118409] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.226 [2024-12-08 06:31:54.131038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.226 [2024-12-08 06:31:54.131369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.226 [2024-12-08 06:31:54.131394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.226 [2024-12-08 06:31:54.131408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.226 [2024-12-08 06:31:54.131600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.226 [2024-12-08 06:31:54.131843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.226 [2024-12-08 06:31:54.131864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.226 [2024-12-08 06:31:54.131878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.226 [2024-12-08 06:31:54.131890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.226 [2024-12-08 06:31:54.144389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.226 [2024-12-08 06:31:54.144727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.226 [2024-12-08 06:31:54.144754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.226 [2024-12-08 06:31:54.144769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.226 [2024-12-08 06:31:54.144973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.226 [2024-12-08 06:31:54.145184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.226 [2024-12-08 06:31:54.145203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.226 [2024-12-08 06:31:54.145215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.226 [2024-12-08 06:31:54.145227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.226 [2024-12-08 06:31:54.157809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.226 [2024-12-08 06:31:54.158180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.226 [2024-12-08 06:31:54.158205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.226 [2024-12-08 06:31:54.158219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.226 [2024-12-08 06:31:54.158410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.226 [2024-12-08 06:31:54.158604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.226 [2024-12-08 06:31:54.158623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.226 [2024-12-08 06:31:54.158635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.226 [2024-12-08 06:31:54.158649] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.226 [2024-12-08 06:31:54.171140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.226 [2024-12-08 06:31:54.171477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.226 [2024-12-08 06:31:54.171501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.226 [2024-12-08 06:31:54.171516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.226 [2024-12-08 06:31:54.171733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.226 [2024-12-08 06:31:54.171940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.226 [2024-12-08 06:31:54.171960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.226 [2024-12-08 06:31:54.171973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.226 [2024-12-08 06:31:54.171986] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.226 [2024-12-08 06:31:54.184459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.226 [2024-12-08 06:31:54.184773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.226 [2024-12-08 06:31:54.184798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.226 [2024-12-08 06:31:54.184813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.226 [2024-12-08 06:31:54.185026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.226 [2024-12-08 06:31:54.185221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.226 [2024-12-08 06:31:54.185239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.226 [2024-12-08 06:31:54.185251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.226 [2024-12-08 06:31:54.185263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.226 [2024-12-08 06:31:54.197843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.226 [2024-12-08 06:31:54.198233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.226 [2024-12-08 06:31:54.198257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.226 [2024-12-08 06:31:54.198272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.226 [2024-12-08 06:31:54.198467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.226 [2024-12-08 06:31:54.198662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.226 [2024-12-08 06:31:54.198680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.226 [2024-12-08 06:31:54.198692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.226 [2024-12-08 06:31:54.198719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.226 [2024-12-08 06:31:54.211363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.226 [2024-12-08 06:31:54.211697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.226 [2024-12-08 06:31:54.211744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.226 [2024-12-08 06:31:54.211760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.226 [2024-12-08 06:31:54.211964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.226 [2024-12-08 06:31:54.212175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.226 [2024-12-08 06:31:54.212194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.226 [2024-12-08 06:31:54.212206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.226 [2024-12-08 06:31:54.212218] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.226 [2024-12-08 06:31:54.224699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.226 [2024-12-08 06:31:54.225081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.226 [2024-12-08 06:31:54.225105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.226 [2024-12-08 06:31:54.225120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.226 [2024-12-08 06:31:54.225311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.226 [2024-12-08 06:31:54.225504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.226 [2024-12-08 06:31:54.225522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.226 [2024-12-08 06:31:54.225535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.226 [2024-12-08 06:31:54.225548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.226 [2024-12-08 06:31:54.238256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.226 [2024-12-08 06:31:54.238616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.226 [2024-12-08 06:31:54.238642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.226 [2024-12-08 06:31:54.238657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.226 [2024-12-08 06:31:54.238914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.226 [2024-12-08 06:31:54.239141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.226 [2024-12-08 06:31:54.239166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.226 [2024-12-08 06:31:54.239179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.226 [2024-12-08 06:31:54.239192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.227 [2024-12-08 06:31:54.251648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.227 [2024-12-08 06:31:54.252042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.227 [2024-12-08 06:31:54.252082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.227 [2024-12-08 06:31:54.252097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.227 [2024-12-08 06:31:54.252289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.227 [2024-12-08 06:31:54.252484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.227 [2024-12-08 06:31:54.252502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.227 [2024-12-08 06:31:54.252514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.227 [2024-12-08 06:31:54.252526] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.227 [2024-12-08 06:31:54.265053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.227 [2024-12-08 06:31:54.265403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.227 [2024-12-08 06:31:54.265428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.227 [2024-12-08 06:31:54.265442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.227 [2024-12-08 06:31:54.265634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.227 [2024-12-08 06:31:54.265863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.227 [2024-12-08 06:31:54.265884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.227 [2024-12-08 06:31:54.265897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.227 [2024-12-08 06:31:54.265910] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.227 [2024-12-08 06:31:54.278513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.227 [2024-12-08 06:31:54.278891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.227 [2024-12-08 06:31:54.278918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.227 [2024-12-08 06:31:54.278934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.227 [2024-12-08 06:31:54.279161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.227 [2024-12-08 06:31:54.279356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.227 [2024-12-08 06:31:54.279374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.227 [2024-12-08 06:31:54.279387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.227 [2024-12-08 06:31:54.279404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.227 [2024-12-08 06:31:54.291919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.227 [2024-12-08 06:31:54.292272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.227 [2024-12-08 06:31:54.292296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.227 [2024-12-08 06:31:54.292311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.227 [2024-12-08 06:31:54.292503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.227 [2024-12-08 06:31:54.292697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.227 [2024-12-08 06:31:54.292740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.227 [2024-12-08 06:31:54.292754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.227 [2024-12-08 06:31:54.292766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.227 [2024-12-08 06:31:54.305171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.227 [2024-12-08 06:31:54.305555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.227 [2024-12-08 06:31:54.305581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.227 [2024-12-08 06:31:54.305597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.227 [2024-12-08 06:31:54.305837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.227 [2024-12-08 06:31:54.306057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.227 [2024-12-08 06:31:54.306076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.227 [2024-12-08 06:31:54.306089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.227 [2024-12-08 06:31:54.306101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.227 [2024-12-08 06:31:54.318512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.227 [2024-12-08 06:31:54.318875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.227 [2024-12-08 06:31:54.318901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.227 [2024-12-08 06:31:54.318916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.227 [2024-12-08 06:31:54.319125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.227 [2024-12-08 06:31:54.319319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.227 [2024-12-08 06:31:54.319337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.227 [2024-12-08 06:31:54.319349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.227 [2024-12-08 06:31:54.319362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.227 [2024-12-08 06:31:54.332058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.227 [2024-12-08 06:31:54.332408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.227 [2024-12-08 06:31:54.332433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.227 [2024-12-08 06:31:54.332448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.227 [2024-12-08 06:31:54.332639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.227 [2024-12-08 06:31:54.332866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.227 [2024-12-08 06:31:54.332886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.227 [2024-12-08 06:31:54.332898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.227 [2024-12-08 06:31:54.332911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.488 [2024-12-08 06:31:54.345511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.488 [2024-12-08 06:31:54.345882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.488 [2024-12-08 06:31:54.345907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.488 [2024-12-08 06:31:54.345923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.488 [2024-12-08 06:31:54.346150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.488 [2024-12-08 06:31:54.346345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.488 [2024-12-08 06:31:54.346364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.488 [2024-12-08 06:31:54.346376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.488 [2024-12-08 06:31:54.346389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.488 [2024-12-08 06:31:54.358929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.488 [2024-12-08 06:31:54.359295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.488 [2024-12-08 06:31:54.359320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.488 [2024-12-08 06:31:54.359334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.488 [2024-12-08 06:31:54.359525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.488 [2024-12-08 06:31:54.359746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.488 [2024-12-08 06:31:54.359766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.488 [2024-12-08 06:31:54.359779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.488 [2024-12-08 06:31:54.359792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.488 [2024-12-08 06:31:54.372295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.488 [2024-12-08 06:31:54.372607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.488 [2024-12-08 06:31:54.372631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.488 [2024-12-08 06:31:54.372646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.488 [2024-12-08 06:31:54.372874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.488 [2024-12-08 06:31:54.373094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.488 [2024-12-08 06:31:54.373113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.488 [2024-12-08 06:31:54.373126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.488 [2024-12-08 06:31:54.373138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.488 [2024-12-08 06:31:54.385579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.488 [2024-12-08 06:31:54.385945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.488 [2024-12-08 06:31:54.385970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.488 [2024-12-08 06:31:54.385985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.488 [2024-12-08 06:31:54.386192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.488 [2024-12-08 06:31:54.386386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.488 [2024-12-08 06:31:54.386405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.488 [2024-12-08 06:31:54.386417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.488 [2024-12-08 06:31:54.386429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.488 [2024-12-08 06:31:54.398945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.488 [2024-12-08 06:31:54.399307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.488 [2024-12-08 06:31:54.399332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.488 [2024-12-08 06:31:54.399346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.488 [2024-12-08 06:31:54.399538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.488 [2024-12-08 06:31:54.399760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.488 [2024-12-08 06:31:54.399785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.488 [2024-12-08 06:31:54.399799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.488 [2024-12-08 06:31:54.399811] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.489 [2024-12-08 06:31:54.412269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.489 [2024-12-08 06:31:54.412604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.489 [2024-12-08 06:31:54.412628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.489 [2024-12-08 06:31:54.412643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.489 [2024-12-08 06:31:54.412864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.489 [2024-12-08 06:31:54.413079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.489 [2024-12-08 06:31:54.413102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.489 [2024-12-08 06:31:54.413115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.489 [2024-12-08 06:31:54.413127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.489 [2024-12-08 06:31:54.425677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.489 [2024-12-08 06:31:54.426060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.489 [2024-12-08 06:31:54.426099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.489 [2024-12-08 06:31:54.426114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.489 [2024-12-08 06:31:54.426306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.489 [2024-12-08 06:31:54.426500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.489 [2024-12-08 06:31:54.426519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.489 [2024-12-08 06:31:54.426531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.489 [2024-12-08 06:31:54.426543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.489 [2024-12-08 06:31:54.439047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.489 [2024-12-08 06:31:54.439350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.489 [2024-12-08 06:31:54.439375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.489 [2024-12-08 06:31:54.439389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.489 [2024-12-08 06:31:54.439581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.489 [2024-12-08 06:31:54.439803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.489 [2024-12-08 06:31:54.439823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.489 [2024-12-08 06:31:54.439836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.489 [2024-12-08 06:31:54.439848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.489 [2024-12-08 06:31:54.452371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.489 [2024-12-08 06:31:54.452728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.489 [2024-12-08 06:31:54.452759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.489 [2024-12-08 06:31:54.452775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.489 [2024-12-08 06:31:54.452979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.489 [2024-12-08 06:31:54.453201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.489 [2024-12-08 06:31:54.453219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.489 [2024-12-08 06:31:54.453231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.489 [2024-12-08 06:31:54.453247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.489 [2024-12-08 06:31:54.465806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.489 [2024-12-08 06:31:54.466166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.489 [2024-12-08 06:31:54.466190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.489 [2024-12-08 06:31:54.466205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.489 [2024-12-08 06:31:54.466391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.489 [2024-12-08 06:31:54.466579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.489 [2024-12-08 06:31:54.466602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.489 [2024-12-08 06:31:54.466615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.489 [2024-12-08 06:31:54.466626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.489 [2024-12-08 06:31:54.479095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.489 [2024-12-08 06:31:54.479424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.489 [2024-12-08 06:31:54.479447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.489 [2024-12-08 06:31:54.479461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.489 [2024-12-08 06:31:54.479647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.489 [2024-12-08 06:31:54.480108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.489 [2024-12-08 06:31:54.480128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.489 [2024-12-08 06:31:54.480141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.489 [2024-12-08 06:31:54.480152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.489 [2024-12-08 06:31:54.492082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.489 [2024-12-08 06:31:54.492387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.489 [2024-12-08 06:31:54.492411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.489 [2024-12-08 06:31:54.492425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.489 [2024-12-08 06:31:54.492611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.489 [2024-12-08 06:31:54.492828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.489 [2024-12-08 06:31:54.492848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.489 [2024-12-08 06:31:54.492865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.489 [2024-12-08 06:31:54.492878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.489 [2024-12-08 06:31:54.505246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.489 [2024-12-08 06:31:54.505559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.489 [2024-12-08 06:31:54.505582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.489 [2024-12-08 06:31:54.505597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.489 [2024-12-08 06:31:54.505812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.489 [2024-12-08 06:31:54.506027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.489 [2024-12-08 06:31:54.506045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.489 [2024-12-08 06:31:54.506057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.489 [2024-12-08 06:31:54.506068] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.489 [2024-12-08 06:31:54.518415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.489 [2024-12-08 06:31:54.518744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.489 [2024-12-08 06:31:54.518773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.489 [2024-12-08 06:31:54.518788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.489 [2024-12-08 06:31:54.519008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.489 [2024-12-08 06:31:54.519208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.489 [2024-12-08 06:31:54.519226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.489 [2024-12-08 06:31:54.519239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.489 [2024-12-08 06:31:54.519251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.489 [2024-12-08 06:31:54.531671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.489 [2024-12-08 06:31:54.532043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.489 [2024-12-08 06:31:54.604933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.489 [2024-12-08 06:31:54.604962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.489 [2024-12-08 06:31:54.605169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.489 [2024-12-08 06:31:54.605196] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress.
00:28:04.490 [2024-12-08 06:31:54.605390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.490 [2024-12-08 06:31:54.605410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.490 [2024-12-08 06:31:54.605422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.490 [2024-12-08 06:31:54.605433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.752 [2024-12-08 06:31:54.618374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.752 [2024-12-08 06:31:54.618792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.753 [2024-12-08 06:31:54.618819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.753 [2024-12-08 06:31:54.618840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.753 [2024-12-08 06:31:54.619028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.753 [2024-12-08 06:31:54.619218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.753 [2024-12-08 06:31:54.619238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.753 [2024-12-08 06:31:54.619251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.753 [2024-12-08 06:31:54.619263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.753 [2024-12-08 06:31:54.631386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.753 [2024-12-08 06:31:54.631792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.753 [2024-12-08 06:31:54.631818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.753 [2024-12-08 06:31:54.631832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.753 [2024-12-08 06:31:54.632018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.753 [2024-12-08 06:31:54.632207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.753 [2024-12-08 06:31:54.632226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.753 [2024-12-08 06:31:54.632238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.753 [2024-12-08 06:31:54.632251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.753 [2024-12-08 06:31:54.644572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.753 [2024-12-08 06:31:54.644950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.753 [2024-12-08 06:31:54.644982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.753 [2024-12-08 06:31:54.644997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.753 [2024-12-08 06:31:54.645188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.753 [2024-12-08 06:31:54.645383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.753 [2024-12-08 06:31:54.645404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.753 [2024-12-08 06:31:54.645416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.753 [2024-12-08 06:31:54.645428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.753 [2024-12-08 06:31:54.657780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.753 [2024-12-08 06:31:54.658199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.753 [2024-12-08 06:31:54.658227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.753 [2024-12-08 06:31:54.658242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.753 [2024-12-08 06:31:54.658441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.753 [2024-12-08 06:31:54.658648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.753 [2024-12-08 06:31:54.658670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.753 [2024-12-08 06:31:54.658683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.753 [2024-12-08 06:31:54.658696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.753 [2024-12-08 06:31:54.671031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.753 [2024-12-08 06:31:54.671436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.753 [2024-12-08 06:31:54.671462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.753 [2024-12-08 06:31:54.671476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.753 [2024-12-08 06:31:54.671663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.753 [2024-12-08 06:31:54.671886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.753 [2024-12-08 06:31:54.671906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.753 [2024-12-08 06:31:54.671919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.753 [2024-12-08 06:31:54.671932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.753 [2024-12-08 06:31:54.684143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.753 [2024-12-08 06:31:54.684556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.753 [2024-12-08 06:31:54.684581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.753 [2024-12-08 06:31:54.684595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.753 [2024-12-08 06:31:54.684794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.753 [2024-12-08 06:31:54.684984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.753 [2024-12-08 06:31:54.685002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.753 [2024-12-08 06:31:54.685015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.753 [2024-12-08 06:31:54.685027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.753 [2024-12-08 06:31:54.697280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.753 [2024-12-08 06:31:54.697695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.753 [2024-12-08 06:31:54.697742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.753 [2024-12-08 06:31:54.697757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.753 [2024-12-08 06:31:54.697948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.753 [2024-12-08 06:31:54.698153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.753 [2024-12-08 06:31:54.698173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.753 [2024-12-08 06:31:54.698191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.753 [2024-12-08 06:31:54.698204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.753 [2024-12-08 06:31:54.710460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.753 [2024-12-08 06:31:54.710863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.753 [2024-12-08 06:31:54.710890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.753 [2024-12-08 06:31:54.710905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.753 [2024-12-08 06:31:54.711093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.753 [2024-12-08 06:31:54.711282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.753 [2024-12-08 06:31:54.711302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.753 [2024-12-08 06:31:54.711314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.753 [2024-12-08 06:31:54.711326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.753 [2024-12-08 06:31:54.723546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.753 [2024-12-08 06:31:54.723970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.753 [2024-12-08 06:31:54.723995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.753 [2024-12-08 06:31:54.724009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.753 [2024-12-08 06:31:54.724203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.753 [2024-12-08 06:31:54.724393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.753 [2024-12-08 06:31:54.724413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.753 [2024-12-08 06:31:54.724426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.753 [2024-12-08 06:31:54.724438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.753 [2024-12-08 06:31:54.736788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.753 [2024-12-08 06:31:54.737189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.753 [2024-12-08 06:31:54.737224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.753 [2024-12-08 06:31:54.737238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.753 [2024-12-08 06:31:54.737423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.753 [2024-12-08 06:31:54.737613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.753 [2024-12-08 06:31:54.737633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.754 [2024-12-08 06:31:54.737645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.754 [2024-12-08 06:31:54.737657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.754 [2024-12-08 06:31:54.749871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.754 [2024-12-08 06:31:54.750245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.754 [2024-12-08 06:31:54.750270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.754 [2024-12-08 06:31:54.750284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.754 [2024-12-08 06:31:54.750470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.754 [2024-12-08 06:31:54.750659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.754 [2024-12-08 06:31:54.750679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.754 [2024-12-08 06:31:54.750692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.754 [2024-12-08 06:31:54.750704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.754 [2024-12-08 06:31:54.763016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.754 [2024-12-08 06:31:54.763439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.754 [2024-12-08 06:31:54.763466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.754 [2024-12-08 06:31:54.763481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.754 [2024-12-08 06:31:54.763668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.754 [2024-12-08 06:31:54.763890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.754 [2024-12-08 06:31:54.763912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.754 [2024-12-08 06:31:54.763926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.754 [2024-12-08 06:31:54.763938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.754 [2024-12-08 06:31:54.776253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.754 [2024-12-08 06:31:54.776603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.754 [2024-12-08 06:31:54.776629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.754 [2024-12-08 06:31:54.776644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.754 [2024-12-08 06:31:54.776844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.754 [2024-12-08 06:31:54.777041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.754 [2024-12-08 06:31:54.777062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.754 [2024-12-08 06:31:54.777074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.754 [2024-12-08 06:31:54.777087] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.754 [2024-12-08 06:31:54.789360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.754 [2024-12-08 06:31:54.789778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.754 [2024-12-08 06:31:54.789804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.754 [2024-12-08 06:31:54.789823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.754 [2024-12-08 06:31:54.790041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.754 [2024-12-08 06:31:54.790231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.754 [2024-12-08 06:31:54.790251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.754 [2024-12-08 06:31:54.790263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.754 [2024-12-08 06:31:54.790275] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.754 [2024-12-08 06:31:54.802452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.754 [2024-12-08 06:31:54.802854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.754 [2024-12-08 06:31:54.802880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.754 [2024-12-08 06:31:54.802894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.754 [2024-12-08 06:31:54.803080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.754 [2024-12-08 06:31:54.803269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.754 [2024-12-08 06:31:54.803289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.754 [2024-12-08 06:31:54.803301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.754 [2024-12-08 06:31:54.803313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.754 [2024-12-08 06:31:54.815450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.754 [2024-12-08 06:31:54.815824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.754 [2024-12-08 06:31:54.815850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.754 [2024-12-08 06:31:54.815865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.754 [2024-12-08 06:31:54.816050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.754 [2024-12-08 06:31:54.816239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.754 [2024-12-08 06:31:54.816259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.754 [2024-12-08 06:31:54.816271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.754 [2024-12-08 06:31:54.816283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.754 [2024-12-08 06:31:54.828715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.754 [2024-12-08 06:31:54.829104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.754 [2024-12-08 06:31:54.829129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.754 [2024-12-08 06:31:54.829143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.754 [2024-12-08 06:31:54.829329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.754 [2024-12-08 06:31:54.829523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.754 [2024-12-08 06:31:54.829542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.754 [2024-12-08 06:31:54.829554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.754 [2024-12-08 06:31:54.829566] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.754 [2024-12-08 06:31:54.841800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.754 [2024-12-08 06:31:54.842216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.754 [2024-12-08 06:31:54.842266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.754 [2024-12-08 06:31:54.842280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.754 [2024-12-08 06:31:54.842466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.754 [2024-12-08 06:31:54.842655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.754 [2024-12-08 06:31:54.842673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.754 [2024-12-08 06:31:54.842686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.754 [2024-12-08 06:31:54.842698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.754 [2024-12-08 06:31:54.854959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.754 [2024-12-08 06:31:54.855382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.754 [2024-12-08 06:31:54.855415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.754 [2024-12-08 06:31:54.855431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.754 [2024-12-08 06:31:54.855627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.754 [2024-12-08 06:31:54.855883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.754 [2024-12-08 06:31:54.855910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.754 [2024-12-08 06:31:54.855926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.754 [2024-12-08 06:31:54.855940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:04.754 [2024-12-08 06:31:54.868779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:04.754 [2024-12-08 06:31:54.869197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.754 [2024-12-08 06:31:54.869225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:04.754 [2024-12-08 06:31:54.869241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:04.754 [2024-12-08 06:31:54.869461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:04.755 [2024-12-08 06:31:54.869658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:04.755 [2024-12-08 06:31:54.869679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:04.755 [2024-12-08 06:31:54.869712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:04.755 [2024-12-08 06:31:54.869737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.018 5581.75 IOPS, 21.80 MiB/s [2024-12-08T05:31:55.137Z] [2024-12-08 06:31:54.882088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.018 [2024-12-08 06:31:54.882497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.018 [2024-12-08 06:31:54.882523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.018 [2024-12-08 06:31:54.882538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.018 [2024-12-08 06:31:54.882754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.018 [2024-12-08 06:31:54.882970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.018 [2024-12-08 06:31:54.883012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.018 [2024-12-08 06:31:54.883027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.018 [2024-12-08 06:31:54.883040] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.018 [2024-12-08 06:31:54.895154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.018 [2024-12-08 06:31:54.895555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.018 [2024-12-08 06:31:54.895582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.018 [2024-12-08 06:31:54.895596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.018 [2024-12-08 06:31:54.895813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.018 [2024-12-08 06:31:54.896009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.018 [2024-12-08 06:31:54.896030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.018 [2024-12-08 06:31:54.896042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.018 [2024-12-08 06:31:54.896055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.018 [2024-12-08 06:31:54.908514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.018 [2024-12-08 06:31:54.908866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.019 [2024-12-08 06:31:54.908893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.019 [2024-12-08 06:31:54.908908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.019 [2024-12-08 06:31:54.909111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.019 [2024-12-08 06:31:54.909303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.019 [2024-12-08 06:31:54.909323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.019 [2024-12-08 06:31:54.909336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.019 [2024-12-08 06:31:54.909348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.019 [2024-12-08 06:31:54.921564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.019 [2024-12-08 06:31:54.921929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.019 [2024-12-08 06:31:54.921955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.019 [2024-12-08 06:31:54.921969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.019 [2024-12-08 06:31:54.922155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.019 [2024-12-08 06:31:54.922346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.019 [2024-12-08 06:31:54.922366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.019 [2024-12-08 06:31:54.922379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.019 [2024-12-08 06:31:54.922391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.019 [2024-12-08 06:31:54.934865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.019 [2024-12-08 06:31:54.935299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.019 [2024-12-08 06:31:54.935325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.019 [2024-12-08 06:31:54.935339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.019 [2024-12-08 06:31:54.935530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.019 [2024-12-08 06:31:54.935767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.019 [2024-12-08 06:31:54.935789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.019 [2024-12-08 06:31:54.935803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.019 [2024-12-08 06:31:54.935815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.019 [2024-12-08 06:31:54.947963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.019 [2024-12-08 06:31:54.948381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.019 [2024-12-08 06:31:54.948406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.019 [2024-12-08 06:31:54.948420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.019 [2024-12-08 06:31:54.948605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.019 [2024-12-08 06:31:54.948827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.019 [2024-12-08 06:31:54.948859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.019 [2024-12-08 06:31:54.948872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.019 [2024-12-08 06:31:54.948885] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.019 [2024-12-08 06:31:54.961146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.019 [2024-12-08 06:31:54.961574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.019 [2024-12-08 06:31:54.961629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.019 [2024-12-08 06:31:54.961644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.019 [2024-12-08 06:31:54.961867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.019 [2024-12-08 06:31:54.962076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.019 [2024-12-08 06:31:54.962096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.019 [2024-12-08 06:31:54.962108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.019 [2024-12-08 06:31:54.962120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.019 [2024-12-08 06:31:54.974147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.019 [2024-12-08 06:31:54.974496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.019 [2024-12-08 06:31:54.974522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.019 [2024-12-08 06:31:54.974537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.019 [2024-12-08 06:31:54.974751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.019 [2024-12-08 06:31:54.974948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.019 [2024-12-08 06:31:54.974969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.019 [2024-12-08 06:31:54.974982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.019 [2024-12-08 06:31:54.974994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.019 [2024-12-08 06:31:54.987561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.019 [2024-12-08 06:31:54.987937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.019 [2024-12-08 06:31:54.987966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.019 [2024-12-08 06:31:54.987981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.019 [2024-12-08 06:31:54.988204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.019 [2024-12-08 06:31:54.988421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.019 [2024-12-08 06:31:54.988442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.019 [2024-12-08 06:31:54.988455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.019 [2024-12-08 06:31:54.988466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.019 [2024-12-08 06:31:55.000913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.019 [2024-12-08 06:31:55.001244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.019 [2024-12-08 06:31:55.001269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.019 [2024-12-08 06:31:55.001283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.019 [2024-12-08 06:31:55.001474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.019 [2024-12-08 06:31:55.001665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.019 [2024-12-08 06:31:55.001685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.019 [2024-12-08 06:31:55.001698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.019 [2024-12-08 06:31:55.001737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.019 [2024-12-08 06:31:55.014356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.019 [2024-12-08 06:31:55.014701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.019 [2024-12-08 06:31:55.014735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.019 [2024-12-08 06:31:55.014752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.019 [2024-12-08 06:31:55.014950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.019 [2024-12-08 06:31:55.015199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.019 [2024-12-08 06:31:55.015219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.019 [2024-12-08 06:31:55.015231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.019 [2024-12-08 06:31:55.015244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.019 [2024-12-08 06:31:55.027501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.019 [2024-12-08 06:31:55.027818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.019 [2024-12-08 06:31:55.027844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.019 [2024-12-08 06:31:55.027860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.019 [2024-12-08 06:31:55.028065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.019 [2024-12-08 06:31:55.028256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.019 [2024-12-08 06:31:55.028276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.019 [2024-12-08 06:31:55.028289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.019 [2024-12-08 06:31:55.028302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.020 [2024-12-08 06:31:55.040748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.020 [2024-12-08 06:31:55.041091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.020 [2024-12-08 06:31:55.041116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.020 [2024-12-08 06:31:55.041130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.020 [2024-12-08 06:31:55.041316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.020 [2024-12-08 06:31:55.041507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.020 [2024-12-08 06:31:55.041526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.020 [2024-12-08 06:31:55.041544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.020 [2024-12-08 06:31:55.041556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.020 [2024-12-08 06:31:55.054007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.020 [2024-12-08 06:31:55.054360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.020 [2024-12-08 06:31:55.054386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.020 [2024-12-08 06:31:55.054400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.020 [2024-12-08 06:31:55.054586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.020 [2024-12-08 06:31:55.054805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.020 [2024-12-08 06:31:55.054826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.020 [2024-12-08 06:31:55.054839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.020 [2024-12-08 06:31:55.054851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.020 [2024-12-08 06:31:55.067283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.020 [2024-12-08 06:31:55.067625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.020 [2024-12-08 06:31:55.067650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.020 [2024-12-08 06:31:55.067664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.020 [2024-12-08 06:31:55.067877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.020 [2024-12-08 06:31:55.068087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.020 [2024-12-08 06:31:55.068108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.020 [2024-12-08 06:31:55.068120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.020 [2024-12-08 06:31:55.068132] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.020 [2024-12-08 06:31:55.080425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.020 [2024-12-08 06:31:55.080777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.020 [2024-12-08 06:31:55.080803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.020 [2024-12-08 06:31:55.080817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.020 [2024-12-08 06:31:55.081008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.020 [2024-12-08 06:31:55.081214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.020 [2024-12-08 06:31:55.081234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.020 [2024-12-08 06:31:55.081246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.020 [2024-12-08 06:31:55.081258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.020 [2024-12-08 06:31:55.093532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.020 [2024-12-08 06:31:55.093847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.020 [2024-12-08 06:31:55.093873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.020 [2024-12-08 06:31:55.093888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.020 [2024-12-08 06:31:55.094091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.020 [2024-12-08 06:31:55.094281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.020 [2024-12-08 06:31:55.094300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.020 [2024-12-08 06:31:55.094312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.020 [2024-12-08 06:31:55.094324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.020 [2024-12-08 06:31:55.106777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.020 [2024-12-08 06:31:55.107158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.020 [2024-12-08 06:31:55.107185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.020 [2024-12-08 06:31:55.107201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.020 [2024-12-08 06:31:55.107425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.020 [2024-12-08 06:31:55.107648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.020 [2024-12-08 06:31:55.107670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.020 [2024-12-08 06:31:55.107683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.020 [2024-12-08 06:31:55.107696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.020 [2024-12-08 06:31:55.120024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.020 [2024-12-08 06:31:55.120356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.020 [2024-12-08 06:31:55.120381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.020 [2024-12-08 06:31:55.120396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.020 [2024-12-08 06:31:55.120582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.020 [2024-12-08 06:31:55.120816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.020 [2024-12-08 06:31:55.120838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.020 [2024-12-08 06:31:55.120851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.020 [2024-12-08 06:31:55.120864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.020 [2024-12-08 06:31:55.133460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.020 [2024-12-08 06:31:55.133793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.020 [2024-12-08 06:31:55.133831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.020 [2024-12-08 06:31:55.133848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.297 [2024-12-08 06:31:55.134090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.297 [2024-12-08 06:31:55.134282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.297 [2024-12-08 06:31:55.134301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.297 [2024-12-08 06:31:55.134314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.297 [2024-12-08 06:31:55.134326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.297 [2024-12-08 06:31:55.146764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.297 [2024-12-08 06:31:55.147125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.297 [2024-12-08 06:31:55.147172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.297 [2024-12-08 06:31:55.147187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.297 [2024-12-08 06:31:55.147377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.297 [2024-12-08 06:31:55.147572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.297 [2024-12-08 06:31:55.147591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.297 [2024-12-08 06:31:55.147604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.297 [2024-12-08 06:31:55.147616] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.297 [2024-12-08 06:31:55.160428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.297 [2024-12-08 06:31:55.160787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.297 [2024-12-08 06:31:55.160816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.297 [2024-12-08 06:31:55.160832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.297 [2024-12-08 06:31:55.161055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.297 [2024-12-08 06:31:55.161266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.297 [2024-12-08 06:31:55.161285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.297 [2024-12-08 06:31:55.161298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.297 [2024-12-08 06:31:55.161309] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.297 [2024-12-08 06:31:55.173875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.297 [2024-12-08 06:31:55.174326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.297 [2024-12-08 06:31:55.174352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.297 [2024-12-08 06:31:55.174366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.297 [2024-12-08 06:31:55.174563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.297 [2024-12-08 06:31:55.174806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.297 [2024-12-08 06:31:55.174829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.297 [2024-12-08 06:31:55.174844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.297 [2024-12-08 06:31:55.174857] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.297 [2024-12-08 06:31:55.187342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.297 [2024-12-08 06:31:55.187695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.297 [2024-12-08 06:31:55.187755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.297 [2024-12-08 06:31:55.187771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.297 [2024-12-08 06:31:55.187974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.297 [2024-12-08 06:31:55.188202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.297 [2024-12-08 06:31:55.188223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.297 [2024-12-08 06:31:55.188236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.297 [2024-12-08 06:31:55.188248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.297 [2024-12-08 06:31:55.200598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.297 [2024-12-08 06:31:55.200942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.297 [2024-12-08 06:31:55.200968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.297 [2024-12-08 06:31:55.200990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.297 [2024-12-08 06:31:55.201194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.297 [2024-12-08 06:31:55.201384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.298 [2024-12-08 06:31:55.201403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.298 [2024-12-08 06:31:55.201415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.298 [2024-12-08 06:31:55.201427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.298 [2024-12-08 06:31:55.213973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.298 [2024-12-08 06:31:55.214318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.298 [2024-12-08 06:31:55.214343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.298 [2024-12-08 06:31:55.214357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.298 [2024-12-08 06:31:55.214543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.298 [2024-12-08 06:31:55.214759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.298 [2024-12-08 06:31:55.214780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.298 [2024-12-08 06:31:55.214803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.298 [2024-12-08 06:31:55.214816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.298 [2024-12-08 06:31:55.227205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.298 [2024-12-08 06:31:55.227536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.298 [2024-12-08 06:31:55.227561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.298 [2024-12-08 06:31:55.227574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.298 [2024-12-08 06:31:55.227788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.298 [2024-12-08 06:31:55.227984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.298 [2024-12-08 06:31:55.228004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.298 [2024-12-08 06:31:55.228016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.298 [2024-12-08 06:31:55.228028] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.298 [2024-12-08 06:31:55.240572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.298 [2024-12-08 06:31:55.240982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.298 [2024-12-08 06:31:55.241009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.298 [2024-12-08 06:31:55.241039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.298 [2024-12-08 06:31:55.241230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.298 [2024-12-08 06:31:55.241424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.298 [2024-12-08 06:31:55.241445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.298 [2024-12-08 06:31:55.241457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.298 [2024-12-08 06:31:55.241469] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.298 [2024-12-08 06:31:55.253946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.298 [2024-12-08 06:31:55.254380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.298 [2024-12-08 06:31:55.254406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.298 [2024-12-08 06:31:55.254420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.298 [2024-12-08 06:31:55.254612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.298 [2024-12-08 06:31:55.254840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.298 [2024-12-08 06:31:55.254863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.298 [2024-12-08 06:31:55.254877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.298 [2024-12-08 06:31:55.254890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.298 [2024-12-08 06:31:55.267167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.298 [2024-12-08 06:31:55.267529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.298 [2024-12-08 06:31:55.267555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.298 [2024-12-08 06:31:55.267569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.298 [2024-12-08 06:31:55.267800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.298 [2024-12-08 06:31:55.267995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.298 [2024-12-08 06:31:55.268015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.298 [2024-12-08 06:31:55.268028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.298 [2024-12-08 06:31:55.268040] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.298 [2024-12-08 06:31:55.280223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.298 [2024-12-08 06:31:55.280599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.298 [2024-12-08 06:31:55.280624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.298 [2024-12-08 06:31:55.280638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.298 [2024-12-08 06:31:55.280854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.298 [2024-12-08 06:31:55.281065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.298 [2024-12-08 06:31:55.281085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.298 [2024-12-08 06:31:55.281098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.298 [2024-12-08 06:31:55.281110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.298 [2024-12-08 06:31:55.293419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.298 [2024-12-08 06:31:55.293815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.298 [2024-12-08 06:31:55.293841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.298 [2024-12-08 06:31:55.293856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.298 [2024-12-08 06:31:55.294060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.298 [2024-12-08 06:31:55.294252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.298 [2024-12-08 06:31:55.294272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.298 [2024-12-08 06:31:55.294284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.298 [2024-12-08 06:31:55.294296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.298 [2024-12-08 06:31:55.306635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.298 [2024-12-08 06:31:55.307025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.298 [2024-12-08 06:31:55.307055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.298 [2024-12-08 06:31:55.307069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.298 [2024-12-08 06:31:55.307260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.298 [2024-12-08 06:31:55.307449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.298 [2024-12-08 06:31:55.307470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.298 [2024-12-08 06:31:55.307483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.298 [2024-12-08 06:31:55.307496] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.298 [2024-12-08 06:31:55.319728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.298 [2024-12-08 06:31:55.320134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.298 [2024-12-08 06:31:55.320158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.298 [2024-12-08 06:31:55.320172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.298 [2024-12-08 06:31:55.320358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.298 [2024-12-08 06:31:55.320548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.298 [2024-12-08 06:31:55.320568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.298 [2024-12-08 06:31:55.320581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.298 [2024-12-08 06:31:55.320593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.298 [2024-12-08 06:31:55.332930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.298 [2024-12-08 06:31:55.333357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.298 [2024-12-08 06:31:55.333409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.298 [2024-12-08 06:31:55.333424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.298 [2024-12-08 06:31:55.333609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.298 [2024-12-08 06:31:55.333832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.298 [2024-12-08 06:31:55.333852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.298 [2024-12-08 06:31:55.333864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.298 [2024-12-08 06:31:55.333876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.298 [2024-12-08 06:31:55.346118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.298 [2024-12-08 06:31:55.346502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.298 [2024-12-08 06:31:55.346528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.298 [2024-12-08 06:31:55.346542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.298 [2024-12-08 06:31:55.346753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.298 [2024-12-08 06:31:55.346955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.298 [2024-12-08 06:31:55.346977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.298 [2024-12-08 06:31:55.346990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.298 [2024-12-08 06:31:55.347002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.298 [2024-12-08 06:31:55.359251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.298 [2024-12-08 06:31:55.359686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.298 [2024-12-08 06:31:55.359714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.298 [2024-12-08 06:31:55.359755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.298 [2024-12-08 06:31:55.359960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.298 [2024-12-08 06:31:55.360202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.298 [2024-12-08 06:31:55.360225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.298 [2024-12-08 06:31:55.360254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.298 [2024-12-08 06:31:55.360268] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.298 [2024-12-08 06:31:55.372459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.298 [2024-12-08 06:31:55.372871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.298 [2024-12-08 06:31:55.372899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.298 [2024-12-08 06:31:55.372914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.299 [2024-12-08 06:31:55.373120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.299 [2024-12-08 06:31:55.373310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.299 [2024-12-08 06:31:55.373330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.299 [2024-12-08 06:31:55.373343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.299 [2024-12-08 06:31:55.373355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.299 [2024-12-08 06:31:55.385677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.299 [2024-12-08 06:31:55.386076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.299 [2024-12-08 06:31:55.386102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.299 [2024-12-08 06:31:55.386116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.299 [2024-12-08 06:31:55.386301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.299 [2024-12-08 06:31:55.386493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.299 [2024-12-08 06:31:55.386513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.299 [2024-12-08 06:31:55.386531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.299 [2024-12-08 06:31:55.386544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.299 [2024-12-08 06:31:55.398912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.299 [2024-12-08 06:31:55.399325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.299 [2024-12-08 06:31:55.399350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.299 [2024-12-08 06:31:55.399364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.299 [2024-12-08 06:31:55.399558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.299 [2024-12-08 06:31:55.399779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.299 [2024-12-08 06:31:55.399801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.299 [2024-12-08 06:31:55.399814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.299 [2024-12-08 06:31:55.399827] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.581 [2024-12-08 06:31:55.412003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.581 [2024-12-08 06:31:55.412339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.581 [2024-12-08 06:31:55.412365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.581 [2024-12-08 06:31:55.412380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.581 [2024-12-08 06:31:55.412572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.581 [2024-12-08 06:31:55.412781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.581 [2024-12-08 06:31:55.412801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.581 [2024-12-08 06:31:55.412814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.581 [2024-12-08 06:31:55.412826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.581 [2024-12-08 06:31:55.425229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.581 [2024-12-08 06:31:55.425548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.581 [2024-12-08 06:31:55.425574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.581 [2024-12-08 06:31:55.425589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.581 [2024-12-08 06:31:55.425792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.581 [2024-12-08 06:31:55.425988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.581 [2024-12-08 06:31:55.426008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.581 [2024-12-08 06:31:55.426020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.581 [2024-12-08 06:31:55.426032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.581 [2024-12-08 06:31:55.438274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.581 [2024-12-08 06:31:55.438663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.581 [2024-12-08 06:31:55.438688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.581 [2024-12-08 06:31:55.438702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.581 [2024-12-08 06:31:55.438917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.581 [2024-12-08 06:31:55.439126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.581 [2024-12-08 06:31:55.439147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.581 [2024-12-08 06:31:55.439159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.581 [2024-12-08 06:31:55.439172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.581 [2024-12-08 06:31:55.451671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.581 [2024-12-08 06:31:55.452118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.581 [2024-12-08 06:31:55.452145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.581 [2024-12-08 06:31:55.452159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.581 [2024-12-08 06:31:55.452350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.581 [2024-12-08 06:31:55.452545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.581 [2024-12-08 06:31:55.452565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.581 [2024-12-08 06:31:55.452578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.581 [2024-12-08 06:31:55.452590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.581 [2024-12-08 06:31:55.465152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.581 [2024-12-08 06:31:55.465506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.581 [2024-12-08 06:31:55.465531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.581 [2024-12-08 06:31:55.465545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.581 [2024-12-08 06:31:55.465760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.581 [2024-12-08 06:31:55.465969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.581 [2024-12-08 06:31:55.465990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.581 [2024-12-08 06:31:55.466004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.581 [2024-12-08 06:31:55.466030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.581 [2024-12-08 06:31:55.478532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.581 [2024-12-08 06:31:55.478867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.582 [2024-12-08 06:31:55.478895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.582 [2024-12-08 06:31:55.478916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.582 [2024-12-08 06:31:55.479141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.582 [2024-12-08 06:31:55.479331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.582 [2024-12-08 06:31:55.479350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.582 [2024-12-08 06:31:55.479362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.582 [2024-12-08 06:31:55.479373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.582 [2024-12-08 06:31:55.491763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.582 [2024-12-08 06:31:55.492121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.582 [2024-12-08 06:31:55.492146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.582 [2024-12-08 06:31:55.492161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.582 [2024-12-08 06:31:55.492346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.582 [2024-12-08 06:31:55.492537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.582 [2024-12-08 06:31:55.492556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.582 [2024-12-08 06:31:55.492568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.582 [2024-12-08 06:31:55.492580] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.582 [2024-12-08 06:31:55.505019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.582 [2024-12-08 06:31:55.505375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.582 [2024-12-08 06:31:55.505401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.582 [2024-12-08 06:31:55.505415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.582 [2024-12-08 06:31:55.505600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.582 [2024-12-08 06:31:55.505853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.582 [2024-12-08 06:31:55.505875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.582 [2024-12-08 06:31:55.505888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.582 [2024-12-08 06:31:55.505901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.582 [2024-12-08 06:31:55.518081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.582 [2024-12-08 06:31:55.518419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.582 [2024-12-08 06:31:55.518444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.582 [2024-12-08 06:31:55.518458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.582 [2024-12-08 06:31:55.518644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.582 [2024-12-08 06:31:55.518885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.582 [2024-12-08 06:31:55.518907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.582 [2024-12-08 06:31:55.518920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.582 [2024-12-08 06:31:55.518933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.582 [2024-12-08 06:31:55.531188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.582 [2024-12-08 06:31:55.531495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.582 [2024-12-08 06:31:55.531520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.582 [2024-12-08 06:31:55.531534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.582 [2024-12-08 06:31:55.531730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.582 [2024-12-08 06:31:55.531942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.582 [2024-12-08 06:31:55.531962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.582 [2024-12-08 06:31:55.531975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.582 [2024-12-08 06:31:55.531986] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.582 [2024-12-08 06:31:55.544203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.582 [2024-12-08 06:31:55.544506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.582 [2024-12-08 06:31:55.544530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.582 [2024-12-08 06:31:55.544545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.582 [2024-12-08 06:31:55.544755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.582 [2024-12-08 06:31:55.544953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.582 [2024-12-08 06:31:55.544972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.582 [2024-12-08 06:31:55.544985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.582 [2024-12-08 06:31:55.544996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.582 [2024-12-08 06:31:55.557346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.582 [2024-12-08 06:31:55.557678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.582 [2024-12-08 06:31:55.557702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.582 [2024-12-08 06:31:55.557716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.582 [2024-12-08 06:31:55.557936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.582 [2024-12-08 06:31:55.558148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.582 [2024-12-08 06:31:55.558168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.582 [2024-12-08 06:31:55.558185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.582 [2024-12-08 06:31:55.558196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.582 [2024-12-08 06:31:55.570578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.582 [2024-12-08 06:31:55.570891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.582 [2024-12-08 06:31:55.570916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.582 [2024-12-08 06:31:55.570931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.582 [2024-12-08 06:31:55.571116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.582 [2024-12-08 06:31:55.571306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.582 [2024-12-08 06:31:55.571325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.582 [2024-12-08 06:31:55.571337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.582 [2024-12-08 06:31:55.571349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.582 [2024-12-08 06:31:55.583713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.582 [2024-12-08 06:31:55.584059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.582 [2024-12-08 06:31:55.584083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.582 [2024-12-08 06:31:55.584098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.582 [2024-12-08 06:31:55.584283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.582 [2024-12-08 06:31:55.584473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.582 [2024-12-08 06:31:55.584492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.582 [2024-12-08 06:31:55.584505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.582 [2024-12-08 06:31:55.584517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.582 [2024-12-08 06:31:55.596766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.582 [2024-12-08 06:31:55.597105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.582 [2024-12-08 06:31:55.597129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.582 [2024-12-08 06:31:55.597143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.582 [2024-12-08 06:31:55.597328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.582 [2024-12-08 06:31:55.597519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.582 [2024-12-08 06:31:55.597538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.583 [2024-12-08 06:31:55.597550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.583 [2024-12-08 06:31:55.597561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:05.583 [2024-12-08 06:31:55.609823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:05.583 [2024-12-08 06:31:55.610215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.583 [2024-12-08 06:31:55.610268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:05.583 [2024-12-08 06:31:55.610284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:05.583 [2024-12-08 06:31:55.610507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:05.583 [2024-12-08 06:31:55.610736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:05.583 [2024-12-08 06:31:55.610758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:05.583 [2024-12-08 06:31:55.610791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:05.583 [2024-12-08 06:31:55.610805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:05.583 [2024-12-08 06:31:55.623076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.583 [2024-12-08 06:31:55.623402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.583 [2024-12-08 06:31:55.623427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.583 [2024-12-08 06:31:55.623441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.583 [2024-12-08 06:31:55.623627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.583 [2024-12-08 06:31:55.623864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.583 [2024-12-08 06:31:55.623886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.583 [2024-12-08 06:31:55.623899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.583 [2024-12-08 06:31:55.623911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.583 [2024-12-08 06:31:55.636115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.583 [2024-12-08 06:31:55.636447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.583 [2024-12-08 06:31:55.636497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.583 [2024-12-08 06:31:55.636511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.583 [2024-12-08 06:31:55.636697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.583 [2024-12-08 06:31:55.636922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.583 [2024-12-08 06:31:55.636943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.583 [2024-12-08 06:31:55.636957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.583 [2024-12-08 06:31:55.636969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.583 [2024-12-08 06:31:55.649181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.583 [2024-12-08 06:31:55.649532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.583 [2024-12-08 06:31:55.649583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.583 [2024-12-08 06:31:55.649602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.583 [2024-12-08 06:31:55.649818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.583 [2024-12-08 06:31:55.650021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.583 [2024-12-08 06:31:55.650055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.583 [2024-12-08 06:31:55.650068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.583 [2024-12-08 06:31:55.650079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.583 [2024-12-08 06:31:55.662255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.583 [2024-12-08 06:31:55.662607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.583 [2024-12-08 06:31:55.662656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.583 [2024-12-08 06:31:55.662670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.583 [2024-12-08 06:31:55.662888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.583 [2024-12-08 06:31:55.663103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.583 [2024-12-08 06:31:55.663122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.583 [2024-12-08 06:31:55.663134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.583 [2024-12-08 06:31:55.663146] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.583 [2024-12-08 06:31:55.675397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.583 [2024-12-08 06:31:55.675770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.583 [2024-12-08 06:31:55.675795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.583 [2024-12-08 06:31:55.675809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.583 [2024-12-08 06:31:55.675995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.583 [2024-12-08 06:31:55.676184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.583 [2024-12-08 06:31:55.676203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.583 [2024-12-08 06:31:55.676216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.583 [2024-12-08 06:31:55.676227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.583 [2024-12-08 06:31:55.688619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.583 [2024-12-08 06:31:55.688993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.583 [2024-12-08 06:31:55.689041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.583 [2024-12-08 06:31:55.689055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.583 [2024-12-08 06:31:55.689247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.844 [2024-12-08 06:31:55.689447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.844 [2024-12-08 06:31:55.689467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.844 [2024-12-08 06:31:55.689480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.844 [2024-12-08 06:31:55.689492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.844 [2024-12-08 06:31:55.701873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.844 [2024-12-08 06:31:55.702247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.845 [2024-12-08 06:31:55.702296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.845 [2024-12-08 06:31:55.702310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.845 [2024-12-08 06:31:55.702496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.845 [2024-12-08 06:31:55.702686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.845 [2024-12-08 06:31:55.702727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.845 [2024-12-08 06:31:55.702742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.845 [2024-12-08 06:31:55.702755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.845 [2024-12-08 06:31:55.714964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.845 [2024-12-08 06:31:55.715321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.845 [2024-12-08 06:31:55.715373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.845 [2024-12-08 06:31:55.715387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.845 [2024-12-08 06:31:55.715572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.845 [2024-12-08 06:31:55.715790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.845 [2024-12-08 06:31:55.715811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.845 [2024-12-08 06:31:55.715824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.845 [2024-12-08 06:31:55.715835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.845 [2024-12-08 06:31:55.728011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.845 [2024-12-08 06:31:55.728337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.845 [2024-12-08 06:31:55.728390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.845 [2024-12-08 06:31:55.728403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.845 [2024-12-08 06:31:55.728589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.845 [2024-12-08 06:31:55.728808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.845 [2024-12-08 06:31:55.728828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.845 [2024-12-08 06:31:55.728847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.845 [2024-12-08 06:31:55.728859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.845 [2024-12-08 06:31:55.741136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.845 [2024-12-08 06:31:55.741487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.845 [2024-12-08 06:31:55.741538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.845 [2024-12-08 06:31:55.741552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.845 [2024-12-08 06:31:55.741764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.845 [2024-12-08 06:31:55.741961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.845 [2024-12-08 06:31:55.741981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.845 [2024-12-08 06:31:55.741994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.845 [2024-12-08 06:31:55.742005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.845 [2024-12-08 06:31:55.754176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.845 [2024-12-08 06:31:55.754475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.845 [2024-12-08 06:31:55.754528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.845 [2024-12-08 06:31:55.754542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.845 [2024-12-08 06:31:55.754753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.845 [2024-12-08 06:31:55.754950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.845 [2024-12-08 06:31:55.754969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.845 [2024-12-08 06:31:55.754982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.845 [2024-12-08 06:31:55.754994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.845 [2024-12-08 06:31:55.767206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.845 [2024-12-08 06:31:55.767563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.845 [2024-12-08 06:31:55.767611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.845 [2024-12-08 06:31:55.767625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.845 [2024-12-08 06:31:55.767853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.845 [2024-12-08 06:31:55.768056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.845 [2024-12-08 06:31:55.768076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.845 [2024-12-08 06:31:55.768089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.845 [2024-12-08 06:31:55.768116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.845 [2024-12-08 06:31:55.780340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.845 [2024-12-08 06:31:55.780663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.845 [2024-12-08 06:31:55.780715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.845 [2024-12-08 06:31:55.780738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.845 [2024-12-08 06:31:55.780945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.845 [2024-12-08 06:31:55.781153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.845 [2024-12-08 06:31:55.781172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.845 [2024-12-08 06:31:55.781184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.845 [2024-12-08 06:31:55.781195] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.845 [2024-12-08 06:31:55.793357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.845 [2024-12-08 06:31:55.793664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.845 [2024-12-08 06:31:55.793689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.845 [2024-12-08 06:31:55.793703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.845 [2024-12-08 06:31:55.793937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.846 [2024-12-08 06:31:55.794152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.846 [2024-12-08 06:31:55.794171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.846 [2024-12-08 06:31:55.794183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.846 [2024-12-08 06:31:55.794195] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.846 [2024-12-08 06:31:55.806473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.846 [2024-12-08 06:31:55.806826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.846 [2024-12-08 06:31:55.806851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.846 [2024-12-08 06:31:55.806865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.846 [2024-12-08 06:31:55.807050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.846 [2024-12-08 06:31:55.807241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.846 [2024-12-08 06:31:55.807260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.846 [2024-12-08 06:31:55.807272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.846 [2024-12-08 06:31:55.807284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.846 [2024-12-08 06:31:55.819526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.846 [2024-12-08 06:31:55.819867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.846 [2024-12-08 06:31:55.819893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.846 [2024-12-08 06:31:55.819912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.846 [2024-12-08 06:31:55.820098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.846 [2024-12-08 06:31:55.820288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.846 [2024-12-08 06:31:55.820307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.846 [2024-12-08 06:31:55.820319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.846 [2024-12-08 06:31:55.820331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.846 [2024-12-08 06:31:55.832597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.846 [2024-12-08 06:31:55.832942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.846 [2024-12-08 06:31:55.832967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.846 [2024-12-08 06:31:55.832981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.846 [2024-12-08 06:31:55.833167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.846 [2024-12-08 06:31:55.833357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.846 [2024-12-08 06:31:55.833376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.846 [2024-12-08 06:31:55.833388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.846 [2024-12-08 06:31:55.833400] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.846 [2024-12-08 06:31:55.845730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.846 [2024-12-08 06:31:55.846060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.846 [2024-12-08 06:31:55.846085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.846 [2024-12-08 06:31:55.846099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.846 [2024-12-08 06:31:55.846284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.846 [2024-12-08 06:31:55.846474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.846 [2024-12-08 06:31:55.846493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.846 [2024-12-08 06:31:55.846505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.846 [2024-12-08 06:31:55.846516] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.846 [2024-12-08 06:31:55.858870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.846 [2024-12-08 06:31:55.859171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.846 [2024-12-08 06:31:55.859196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.846 [2024-12-08 06:31:55.859211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.846 [2024-12-08 06:31:55.859396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.846 [2024-12-08 06:31:55.859594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.846 [2024-12-08 06:31:55.859613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.846 [2024-12-08 06:31:55.859625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.846 [2024-12-08 06:31:55.859637] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.846 [2024-12-08 06:31:55.872152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.846 [2024-12-08 06:31:55.872491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.846 [2024-12-08 06:31:55.872517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.846 [2024-12-08 06:31:55.872532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.846 [2024-12-08 06:31:55.872754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.846 [2024-12-08 06:31:55.873010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.846 [2024-12-08 06:31:55.873050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.846 [2024-12-08 06:31:55.873066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.846 [2024-12-08 06:31:55.873079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.846 4465.40 IOPS, 17.44 MiB/s [2024-12-08T05:31:55.965Z] [2024-12-08 06:31:55.885398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.846 [2024-12-08 06:31:55.885746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.846 [2024-12-08 06:31:55.885772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.846 [2024-12-08 06:31:55.885786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.846 [2024-12-08 06:31:55.885977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.846 [2024-12-08 06:31:55.886183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.846 [2024-12-08 06:31:55.886203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.846 [2024-12-08 06:31:55.886215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.846 [2024-12-08 06:31:55.886227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.847 [2024-12-08 06:31:55.898508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.847 [2024-12-08 06:31:55.898865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.847 [2024-12-08 06:31:55.898892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.847 [2024-12-08 06:31:55.898906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.847 [2024-12-08 06:31:55.899110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.847 [2024-12-08 06:31:55.899300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.847 [2024-12-08 06:31:55.899319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.847 [2024-12-08 06:31:55.899336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.847 [2024-12-08 06:31:55.899348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.847 [2024-12-08 06:31:55.911672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.847 [2024-12-08 06:31:55.912031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.847 [2024-12-08 06:31:55.912081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.847 [2024-12-08 06:31:55.912095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.847 [2024-12-08 06:31:55.912295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.847 [2024-12-08 06:31:55.912485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.847 [2024-12-08 06:31:55.912504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.847 [2024-12-08 06:31:55.912516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.847 [2024-12-08 06:31:55.912527] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.847 [2024-12-08 06:31:55.924810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.847 [2024-12-08 06:31:55.925140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.847 [2024-12-08 06:31:55.925165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.847 [2024-12-08 06:31:55.925179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.847 [2024-12-08 06:31:55.925364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.847 [2024-12-08 06:31:55.925554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.847 [2024-12-08 06:31:55.925573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.847 [2024-12-08 06:31:55.925585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.847 [2024-12-08 06:31:55.925596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.847 [2024-12-08 06:31:55.937899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.847 [2024-12-08 06:31:55.938231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.847 [2024-12-08 06:31:55.938256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.847 [2024-12-08 06:31:55.938270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.847 [2024-12-08 06:31:55.938455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.847 [2024-12-08 06:31:55.938645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.847 [2024-12-08 06:31:55.938664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.847 [2024-12-08 06:31:55.938676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.847 [2024-12-08 06:31:55.938687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:05.847 [2024-12-08 06:31:55.950980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:05.847 [2024-12-08 06:31:55.951309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:05.847 [2024-12-08 06:31:55.951335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:05.847 [2024-12-08 06:31:55.951349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:05.847 [2024-12-08 06:31:55.951534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:05.847 [2024-12-08 06:31:55.951747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:05.847 [2024-12-08 06:31:55.951767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:05.847 [2024-12-08 06:31:55.951780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:05.847 [2024-12-08 06:31:55.951792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.108 [2024-12-08 06:31:55.964159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.108 [2024-12-08 06:31:55.964487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.108 [2024-12-08 06:31:55.964512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.108 [2024-12-08 06:31:55.964525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.108 [2024-12-08 06:31:55.964711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.108 [2024-12-08 06:31:55.964930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.108 [2024-12-08 06:31:55.964950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.108 [2024-12-08 06:31:55.964963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.108 [2024-12-08 06:31:55.964975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.108 [2024-12-08 06:31:55.977145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.108 [2024-12-08 06:31:55.977450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.108 [2024-12-08 06:31:55.977475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.108 [2024-12-08 06:31:55.977489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.108 [2024-12-08 06:31:55.977675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.108 [2024-12-08 06:31:55.977911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.108 [2024-12-08 06:31:55.977933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.108 [2024-12-08 06:31:55.977945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.108 [2024-12-08 06:31:55.977957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.108 [2024-12-08 06:31:55.990250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.108 [2024-12-08 06:31:55.990583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.108 [2024-12-08 06:31:55.990612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.109 [2024-12-08 06:31:55.990627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.109 [2024-12-08 06:31:55.990843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.109 [2024-12-08 06:31:55.991060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.109 [2024-12-08 06:31:55.991080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.109 [2024-12-08 06:31:55.991108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.109 [2024-12-08 06:31:55.991121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.109 [2024-12-08 06:31:56.003299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.109 [2024-12-08 06:31:56.003638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.109 [2024-12-08 06:31:56.003663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.109 [2024-12-08 06:31:56.003677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.109 [2024-12-08 06:31:56.003895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.109 [2024-12-08 06:31:56.004110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.109 [2024-12-08 06:31:56.004129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.109 [2024-12-08 06:31:56.004142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.109 [2024-12-08 06:31:56.004153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.109 [2024-12-08 06:31:56.016431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.109 [2024-12-08 06:31:56.016771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.109 [2024-12-08 06:31:56.016797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.109 [2024-12-08 06:31:56.016811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.109 [2024-12-08 06:31:56.016997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.109 [2024-12-08 06:31:56.017188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.109 [2024-12-08 06:31:56.017207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.109 [2024-12-08 06:31:56.017219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.109 [2024-12-08 06:31:56.017231] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.109 [2024-12-08 06:31:56.029502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.109 [2024-12-08 06:31:56.029836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.109 [2024-12-08 06:31:56.029861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.109 [2024-12-08 06:31:56.029875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.109 [2024-12-08 06:31:56.030065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.109 [2024-12-08 06:31:56.030255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.109 [2024-12-08 06:31:56.030274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.109 [2024-12-08 06:31:56.030287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.109 [2024-12-08 06:31:56.030298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.109 [2024-12-08 06:31:56.042583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.109 [2024-12-08 06:31:56.042950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.109 [2024-12-08 06:31:56.042976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.109 [2024-12-08 06:31:56.042990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.109 [2024-12-08 06:31:56.043191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.109 [2024-12-08 06:31:56.043382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.109 [2024-12-08 06:31:56.043401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.109 [2024-12-08 06:31:56.043413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.109 [2024-12-08 06:31:56.043424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.109 [2024-12-08 06:31:56.055664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.109 [2024-12-08 06:31:56.056023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.109 [2024-12-08 06:31:56.056063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.109 [2024-12-08 06:31:56.056076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.109 [2024-12-08 06:31:56.056261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.109 [2024-12-08 06:31:56.056451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.109 [2024-12-08 06:31:56.056471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.109 [2024-12-08 06:31:56.056483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.109 [2024-12-08 06:31:56.056494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.109 [2024-12-08 06:31:56.068763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.109 [2024-12-08 06:31:56.069095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.109 [2024-12-08 06:31:56.069119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.109 [2024-12-08 06:31:56.069133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.109 [2024-12-08 06:31:56.069318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.109 [2024-12-08 06:31:56.069507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.109 [2024-12-08 06:31:56.069526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.109 [2024-12-08 06:31:56.069544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.109 [2024-12-08 06:31:56.069556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.109 [2024-12-08 06:31:56.081845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.109 [2024-12-08 06:31:56.082176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.109 [2024-12-08 06:31:56.082201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.109 [2024-12-08 06:31:56.082215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.109 [2024-12-08 06:31:56.082401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.109 [2024-12-08 06:31:56.082591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.109 [2024-12-08 06:31:56.082610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.109 [2024-12-08 06:31:56.082622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.109 [2024-12-08 06:31:56.082633] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.109 [2024-12-08 06:31:56.094915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.109 [2024-12-08 06:31:56.095240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.109 [2024-12-08 06:31:56.095265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.109 [2024-12-08 06:31:56.095279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.109 [2024-12-08 06:31:56.095465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.109 [2024-12-08 06:31:56.095655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.109 [2024-12-08 06:31:56.095674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.109 [2024-12-08 06:31:56.095687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.109 [2024-12-08 06:31:56.095698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.109 [2024-12-08 06:31:56.108062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.109 [2024-12-08 06:31:56.108388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.109 [2024-12-08 06:31:56.108413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.109 [2024-12-08 06:31:56.108427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.109 [2024-12-08 06:31:56.108612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.109 [2024-12-08 06:31:56.108832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.109 [2024-12-08 06:31:56.108854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.109 [2024-12-08 06:31:56.108867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.109 [2024-12-08 06:31:56.108878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.109 [2024-12-08 06:31:56.121183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.109 [2024-12-08 06:31:56.121514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.110 [2024-12-08 06:31:56.121539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.110 [2024-12-08 06:31:56.121552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.110 [2024-12-08 06:31:56.121762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.110 [2024-12-08 06:31:56.121958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.110 [2024-12-08 06:31:56.121978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.110 [2024-12-08 06:31:56.121991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.110 [2024-12-08 06:31:56.122003] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.110 [2024-12-08 06:31:56.134653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.110 [2024-12-08 06:31:56.135056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.110 [2024-12-08 06:31:56.135083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.110 [2024-12-08 06:31:56.135098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.110 [2024-12-08 06:31:56.135328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.110 [2024-12-08 06:31:56.135544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.110 [2024-12-08 06:31:56.135566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.110 [2024-12-08 06:31:56.135579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.110 [2024-12-08 06:31:56.135592] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.110 [2024-12-08 06:31:56.148119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.110 [2024-12-08 06:31:56.148446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.110 [2024-12-08 06:31:56.148472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.110 [2024-12-08 06:31:56.148487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.110 [2024-12-08 06:31:56.148684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.110 [2024-12-08 06:31:56.148928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.110 [2024-12-08 06:31:56.148950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.110 [2024-12-08 06:31:56.148965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.110 [2024-12-08 06:31:56.148979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.110 [2024-12-08 06:31:56.161957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.110 [2024-12-08 06:31:56.162346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.110 [2024-12-08 06:31:56.162378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.110 [2024-12-08 06:31:56.162394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.110 [2024-12-08 06:31:56.162597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.110 [2024-12-08 06:31:56.162839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.110 [2024-12-08 06:31:56.162862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.110 [2024-12-08 06:31:56.162876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.110 [2024-12-08 06:31:56.162889] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.110 [2024-12-08 06:31:56.175513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.110 [2024-12-08 06:31:56.175867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.110 [2024-12-08 06:31:56.175897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.110 [2024-12-08 06:31:56.175914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.110 [2024-12-08 06:31:56.176155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.110 [2024-12-08 06:31:56.176363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.110 [2024-12-08 06:31:56.176384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.110 [2024-12-08 06:31:56.176398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.110 [2024-12-08 06:31:56.176410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.110 [2024-12-08 06:31:56.188865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.110 [2024-12-08 06:31:56.189275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.110 [2024-12-08 06:31:56.189325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.110 [2024-12-08 06:31:56.189339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.110 [2024-12-08 06:31:56.189524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.110 [2024-12-08 06:31:56.189741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.110 [2024-12-08 06:31:56.189764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.110 [2024-12-08 06:31:56.189779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.110 [2024-12-08 06:31:56.189792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.110 [2024-12-08 06:31:56.202203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.110 [2024-12-08 06:31:56.202533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.110 [2024-12-08 06:31:56.202584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.110 [2024-12-08 06:31:56.202598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.110 [2024-12-08 06:31:56.202825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.110 [2024-12-08 06:31:56.203057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.110 [2024-12-08 06:31:56.203091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.110 [2024-12-08 06:31:56.203104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.110 [2024-12-08 06:31:56.203116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.110 [2024-12-08 06:31:56.215821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.110 [2024-12-08 06:31:56.216211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.110 [2024-12-08 06:31:56.216258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.110 [2024-12-08 06:31:56.216274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.110 [2024-12-08 06:31:56.216492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.110 [2024-12-08 06:31:56.216739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.110 [2024-12-08 06:31:56.216762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.110 [2024-12-08 06:31:56.216776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.110 [2024-12-08 06:31:56.216789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.372 [2024-12-08 06:31:56.229581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.372 [2024-12-08 06:31:56.229949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-12-08 06:31:56.229978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.372 [2024-12-08 06:31:56.229993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.372 [2024-12-08 06:31:56.230223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.372 [2024-12-08 06:31:56.230436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.372 [2024-12-08 06:31:56.230456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.372 [2024-12-08 06:31:56.230469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.372 [2024-12-08 06:31:56.230481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.372 [2024-12-08 06:31:56.243262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.372 [2024-12-08 06:31:56.243628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.372 [2024-12-08 06:31:56.243679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.372 [2024-12-08 06:31:56.243695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.372 [2024-12-08 06:31:56.243933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.372 [2024-12-08 06:31:56.244160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.372 [2024-12-08 06:31:56.244181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.372 [2024-12-08 06:31:56.244199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.372 [2024-12-08 06:31:56.244212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.372 [2024-12-08 06:31:56.256508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.372 [2024-12-08 06:31:56.256862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.372 [2024-12-08 06:31:56.256891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.372 [2024-12-08 06:31:56.256907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.372 [2024-12-08 06:31:56.257139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.372 [2024-12-08 06:31:56.257329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.372 [2024-12-08 06:31:56.257348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.372 [2024-12-08 06:31:56.257360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.372 [2024-12-08 06:31:56.257372] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.372 [2024-12-08 06:31:56.269896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.372 [2024-12-08 06:31:56.270290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.372 [2024-12-08 06:31:56.270343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.372 [2024-12-08 06:31:56.270357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.372 [2024-12-08 06:31:56.270542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.372 [2024-12-08 06:31:56.270758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.372 [2024-12-08 06:31:56.270796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.372 [2024-12-08 06:31:56.270810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.372 [2024-12-08 06:31:56.270823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.372 [2024-12-08 06:31:56.283311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.372 [2024-12-08 06:31:56.283645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.372 [2024-12-08 06:31:56.283697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.372 [2024-12-08 06:31:56.283711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.372 [2024-12-08 06:31:56.283940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.372 [2024-12-08 06:31:56.284172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.372 [2024-12-08 06:31:56.284192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.372 [2024-12-08 06:31:56.284204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.372 [2024-12-08 06:31:56.284216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.372 [2024-12-08 06:31:56.296515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.372 [2024-12-08 06:31:56.296879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.372 [2024-12-08 06:31:56.296933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.372 [2024-12-08 06:31:56.296948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.372 [2024-12-08 06:31:56.297161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.372 [2024-12-08 06:31:56.297352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.372 [2024-12-08 06:31:56.297371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.372 [2024-12-08 06:31:56.297383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.372 [2024-12-08 06:31:56.297394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.372 [2024-12-08 06:31:56.309900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.372 [2024-12-08 06:31:56.310262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.372 [2024-12-08 06:31:56.310314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.372 [2024-12-08 06:31:56.310328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.372 [2024-12-08 06:31:56.310518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.372 [2024-12-08 06:31:56.310737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.372 [2024-12-08 06:31:56.310759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.372 [2024-12-08 06:31:56.310772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.372 [2024-12-08 06:31:56.310785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.372 [2024-12-08 06:31:56.323151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.372 [2024-12-08 06:31:56.323538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.372 [2024-12-08 06:31:56.323580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.372 [2024-12-08 06:31:56.323594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.372 [2024-12-08 06:31:56.323838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.372 [2024-12-08 06:31:56.324047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.372 [2024-12-08 06:31:56.324067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.372 [2024-12-08 06:31:56.324081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.372 [2024-12-08 06:31:56.324094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.373 [2024-12-08 06:31:56.336312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.373 [2024-12-08 06:31:56.336681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.373 [2024-12-08 06:31:56.336743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.373 [2024-12-08 06:31:56.336758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.373 [2024-12-08 06:31:56.336951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.373 [2024-12-08 06:31:56.337141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.373 [2024-12-08 06:31:56.337160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.373 [2024-12-08 06:31:56.337172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.373 [2024-12-08 06:31:56.337183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.373 [2024-12-08 06:31:56.349786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.373 [2024-12-08 06:31:56.350147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.373 [2024-12-08 06:31:56.350192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.373 [2024-12-08 06:31:56.350207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.373 [2024-12-08 06:31:56.350399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.373 [2024-12-08 06:31:56.350597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.373 [2024-12-08 06:31:56.350617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.373 [2024-12-08 06:31:56.350629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.373 [2024-12-08 06:31:56.350641] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.373 [2024-12-08 06:31:56.363143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.373 [2024-12-08 06:31:56.363506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.373 [2024-12-08 06:31:56.363548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.373 [2024-12-08 06:31:56.363563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.373 [2024-12-08 06:31:56.363799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.373 [2024-12-08 06:31:56.364030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.373 [2024-12-08 06:31:56.364050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.373 [2024-12-08 06:31:56.364063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.373 [2024-12-08 06:31:56.364090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.373 [2024-12-08 06:31:56.376508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.373 [2024-12-08 06:31:56.376901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.373 [2024-12-08 06:31:56.376938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.373 [2024-12-08 06:31:56.376953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.373 [2024-12-08 06:31:56.377200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.373 [2024-12-08 06:31:56.377396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.373 [2024-12-08 06:31:56.377416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.373 [2024-12-08 06:31:56.377429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.373 [2024-12-08 06:31:56.377440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.373 [2024-12-08 06:31:56.389851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.373 [2024-12-08 06:31:56.390279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.373 [2024-12-08 06:31:56.390335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.373 [2024-12-08 06:31:56.390350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.373 [2024-12-08 06:31:56.390572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.373 [2024-12-08 06:31:56.390843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.373 [2024-12-08 06:31:56.390866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.373 [2024-12-08 06:31:56.390881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.373 [2024-12-08 06:31:56.390894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.373 [2024-12-08 06:31:56.403158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.373 [2024-12-08 06:31:56.403488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.373 [2024-12-08 06:31:56.403539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.373 [2024-12-08 06:31:56.403553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.373 [2024-12-08 06:31:56.403769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.373 [2024-12-08 06:31:56.403978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.373 [2024-12-08 06:31:56.404009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.373 [2024-12-08 06:31:56.404037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.373 [2024-12-08 06:31:56.404049] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.373 [2024-12-08 06:31:56.416605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.373 [2024-12-08 06:31:56.416978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.373 [2024-12-08 06:31:56.417021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.373 [2024-12-08 06:31:56.417051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.373 [2024-12-08 06:31:56.417252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.373 [2024-12-08 06:31:56.417452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.373 [2024-12-08 06:31:56.417472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.373 [2024-12-08 06:31:56.417490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.373 [2024-12-08 06:31:56.417503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.373 [2024-12-08 06:31:56.430071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.373 [2024-12-08 06:31:56.430472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.373 [2024-12-08 06:31:56.430497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.373 [2024-12-08 06:31:56.430512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.373 [2024-12-08 06:31:56.430697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.373 [2024-12-08 06:31:56.430945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.373 [2024-12-08 06:31:56.430968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.373 [2024-12-08 06:31:56.430982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.373 [2024-12-08 06:31:56.430997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.373 [2024-12-08 06:31:56.443371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.373 [2024-12-08 06:31:56.443767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.373 [2024-12-08 06:31:56.443796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.373 [2024-12-08 06:31:56.443811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.373 [2024-12-08 06:31:56.444034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.373 [2024-12-08 06:31:56.444240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.373 [2024-12-08 06:31:56.444261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.373 [2024-12-08 06:31:56.444273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.373 [2024-12-08 06:31:56.444285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1175960 Killed "${NVMF_APP[@]}" "$@"
00:28:06.373 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:06.373 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:06.373 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:06.373 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:06.373 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:06.373 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1176925
00:28:06.373 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:06.373 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1176925
00:28:06.374 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1176925 ']'
00:28:06.374 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:06.374 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:06.374 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
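The `bdevperf.sh: line 35: 1175960 Killed` message explains the long run of refused connections above: the test deliberately killed the previous nvmf target process, and `tgt_init`/`nvmfappstart -m 0xE` now relaunch `nvmf_tgt` (new pid 1176925) inside the `cvl_0_0_ns_spdk` network namespace. `waitforlisten` then polls, up to `max_retries=100` times, until the new process accepts connections on the RPC socket `/var/tmp/spdk.sock`. A hedged C sketch of that style of wait loop (socket path and retry limit from the log; the 100 ms probe interval is an assumption):

#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Probe a Unix-domain socket until a server is accepting connections,
 * the way waitforlisten probes /var/tmp/spdk.sock. */
static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd); /* target is up and listening */
            return 0;
        }
        close(fd);
        usleep(100 * 1000); /* assumed 100 ms between probes */
    }
    return -1; /* process never started listening */
}

int main(void)
{
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}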
00:28:06.374 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:06.374 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:06.374 [2024-12-08 06:31:56.456745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.374 [2024-12-08 06:31:56.457140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.374 [2024-12-08 06:31:56.457170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.374 [2024-12-08 06:31:56.457187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.374 [2024-12-08 06:31:56.457389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.374 [2024-12-08 06:31:56.457610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.374 [2024-12-08 06:31:56.457630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.374 [2024-12-08 06:31:56.457644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.374 [2024-12-08 06:31:56.457656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.374 [2024-12-08 06:31:56.470220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.374 [2024-12-08 06:31:56.470567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.374 [2024-12-08 06:31:56.470593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.374 [2024-12-08 06:31:56.470608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.374 [2024-12-08 06:31:56.470832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.374 [2024-12-08 06:31:56.471054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.374 [2024-12-08 06:31:56.471089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.374 [2024-12-08 06:31:56.471101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.374 [2024-12-08 06:31:56.471113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.374 [2024-12-08 06:31:56.483484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.374 [2024-12-08 06:31:56.483825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.374 [2024-12-08 06:31:56.483853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.374 [2024-12-08 06:31:56.483869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.374 [2024-12-08 06:31:56.484104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.374 [2024-12-08 06:31:56.484302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.374 [2024-12-08 06:31:56.484322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.374 [2024-12-08 06:31:56.484334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.374 [2024-12-08 06:31:56.484351] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.633 [2024-12-08 06:31:56.496743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.633 [2024-12-08 06:31:56.497111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.633 [2024-12-08 06:31:56.497138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.633 [2024-12-08 06:31:56.497152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.633 [2024-12-08 06:31:56.497343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.633 [2024-12-08 06:31:56.497540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.633 [2024-12-08 06:31:56.497559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.633 [2024-12-08 06:31:56.497571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.633 [2024-12-08 06:31:56.497583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.633 [2024-12-08 06:31:56.503123] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:28:06.633 [2024-12-08 06:31:56.503190] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:06.634 [2024-12-08 06:31:56.510125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.634 [2024-12-08 06:31:56.510470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.634 [2024-12-08 06:31:56.510496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.634 [2024-12-08 06:31:56.510510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.634 [2024-12-08 06:31:56.510714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.634 [2024-12-08 06:31:56.510924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.634 [2024-12-08 06:31:56.510944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.634 [2024-12-08 06:31:56.510958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.634 [2024-12-08 06:31:56.510970] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.634 [2024-12-08 06:31:56.523415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.634 [2024-12-08 06:31:56.523783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.634 [2024-12-08 06:31:56.523810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.634 [2024-12-08 06:31:56.523825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.634 [2024-12-08 06:31:56.524044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.634 [2024-12-08 06:31:56.524255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.634 [2024-12-08 06:31:56.524275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.634 [2024-12-08 06:31:56.524298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.634 [2024-12-08 06:31:56.524310] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
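The `Starting SPDK v25.01-pre ... initialization...` and `[ DPDK EAL parameters: ... ]` lines are the replacement target booting: the `-m 0xE` mask from `nvmf_tgt`'s command line is handed down to the DPDK EAL as `-c 0xE`, and `--file-prefix=spdk0` with `--proc-type=auto` keeps its hugepage state separate from other SPDK processes on the node. For orientation, a hedged sketch of how an SPDK application typically wires such options before `spdk_app_start()` (illustrative only; names follow SPDK's public `spdk/event.h` API, not code from this test):

/* Hedged sketch: setting a reactor mask and RPC address before
 * spdk_app_start(); the real nvmf_tgt parses these from argv. */
#include "spdk/event.h"

static void start_fn(void *ctx)
{
    (void)ctx;
    /* A real target would initialize its subsystems here; the sketch just stops. */
    spdk_app_stop(0);
}

int main(int argc, char **argv)
{
    struct spdk_app_opts opts = {};
    int rc;

    (void)argc;
    (void)argv;
    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "nvmf";                   /* matches the EAL --file-prefix naming above */
    opts.reactor_mask = "0xE";            /* -m 0xE: run reactors on cores 1-3 */
    opts.rpc_addr = "/var/tmp/spdk.sock"; /* the socket waitforlisten polls */

    rc = spdk_app_start(&opts, start_fn, NULL);
    spdk_app_fini();
    return rc;
}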
00:28:06.634 [2024-12-08 06:31:56.536809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.634 [2024-12-08 06:31:56.537123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.634 [2024-12-08 06:31:56.537149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.634 [2024-12-08 06:31:56.537164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.634 [2024-12-08 06:31:56.537355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.634 [2024-12-08 06:31:56.537550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.634 [2024-12-08 06:31:56.537569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.634 [2024-12-08 06:31:56.537581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.634 [2024-12-08 06:31:56.537593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.634 [2024-12-08 06:31:56.550058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.634 [2024-12-08 06:31:56.550373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.634 [2024-12-08 06:31:56.550398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.634 [2024-12-08 06:31:56.550412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.634 [2024-12-08 06:31:56.550604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.634 [2024-12-08 06:31:56.550831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.634 [2024-12-08 06:31:56.550852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.634 [2024-12-08 06:31:56.550865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.634 [2024-12-08 06:31:56.550878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.634 [2024-12-08 06:31:56.563277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.634 [2024-12-08 06:31:56.563586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.634 [2024-12-08 06:31:56.563612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.634 [2024-12-08 06:31:56.563627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.634 [2024-12-08 06:31:56.563863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.634 [2024-12-08 06:31:56.564100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.634 [2024-12-08 06:31:56.564120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.634 [2024-12-08 06:31:56.564133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.634 [2024-12-08 06:31:56.564145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.634 [2024-12-08 06:31:56.576578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.634 [2024-12-08 06:31:56.576959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.634 [2024-12-08 06:31:56.576985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.634 [2024-12-08 06:31:56.576999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.634 [2024-12-08 06:31:56.577167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:06.634 [2024-12-08 06:31:56.577206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.634 [2024-12-08 06:31:56.577400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.634 [2024-12-08 06:31:56.577419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.634 [2024-12-08 06:31:56.577432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.634 [2024-12-08 06:31:56.577444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.634 [2024-12-08 06:31:56.589912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.634 [2024-12-08 06:31:56.590387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.634 [2024-12-08 06:31:56.590425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.634 [2024-12-08 06:31:56.590444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.634 [2024-12-08 06:31:56.590644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.634 [2024-12-08 06:31:56.590877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.634 [2024-12-08 06:31:56.590899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.634 [2024-12-08 06:31:56.590915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.634 [2024-12-08 06:31:56.590931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.634 [2024-12-08 06:31:56.603300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.634 [2024-12-08 06:31:56.603661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.634 [2024-12-08 06:31:56.603688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.634 [2024-12-08 06:31:56.603704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.634 [2024-12-08 06:31:56.603943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.634 [2024-12-08 06:31:56.604165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.634 [2024-12-08 06:31:56.604186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.634 [2024-12-08 06:31:56.604200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.634 [2024-12-08 06:31:56.604212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.634 [2024-12-08 06:31:56.616623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.634 [2024-12-08 06:31:56.616968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.634 [2024-12-08 06:31:56.616996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.634 [2024-12-08 06:31:56.617040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.634 [2024-12-08 06:31:56.617232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.634 [2024-12-08 06:31:56.617429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.634 [2024-12-08 06:31:56.617450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.634 [2024-12-08 06:31:56.617463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.634 [2024-12-08 06:31:56.617476] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.634 [2024-12-08 06:31:56.630028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.634 [2024-12-08 06:31:56.630388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.635 [2024-12-08 06:31:56.630414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.635 [2024-12-08 06:31:56.630429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.635 [2024-12-08 06:31:56.630620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.635 [2024-12-08 06:31:56.630847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.635 [2024-12-08 06:31:56.630870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.635 [2024-12-08 06:31:56.630884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.635 [2024-12-08 06:31:56.630896] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.635 [2024-12-08 06:31:56.633652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:06.635 [2024-12-08 06:31:56.633685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:06.635 [2024-12-08 06:31:56.633700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:06.635 [2024-12-08 06:31:56.633735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:06.635 [2024-12-08 06:31:56.633747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:06.635 [2024-12-08 06:31:56.635300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:06.635 [2024-12-08 06:31:56.635364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:06.635 [2024-12-08 06:31:56.635368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:06.635 [2024-12-08 06:31:56.643533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.635 [2024-12-08 06:31:56.644006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.635 [2024-12-08 06:31:56.644043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.635 [2024-12-08 06:31:56.644063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.635 [2024-12-08 06:31:56.644313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.635 [2024-12-08 06:31:56.644549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.635 [2024-12-08 06:31:56.644575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.635 [2024-12-08 06:31:56.644605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.635 [2024-12-08 06:31:56.644625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.635 [2024-12-08 06:31:56.657115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.635 [2024-12-08 06:31:56.657603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.635 [2024-12-08 06:31:56.657640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.635 [2024-12-08 06:31:56.657660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.635 [2024-12-08 06:31:56.657908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.635 [2024-12-08 06:31:56.658145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.635 [2024-12-08 06:31:56.658167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.635 [2024-12-08 06:31:56.658184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.635 [2024-12-08 06:31:56.658200] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
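The three `reactor_run` notices above line up with the `-m 0xE` mask: 0xE is binary 1110, so the event framework pins reactors to cores 1, 2 and 3 and leaves core 0 alone, which also matches the earlier `Total cores available: 3` notice. A quick C check of that decoding (mask value taken from the log):

#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xE; /* core mask from the nvmf_tgt command line */

    printf("cores selected by 0x%lX:", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1UL << core))
            printf(" %d", core); /* each set bit is one reactor core */
    }
    printf("\n"); /* prints: cores selected by 0xE: 1 2 3 */
    return 0;
}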
00:28:06.635 [2024-12-08 06:31:56.670638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.635 [2024-12-08 06:31:56.671175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.635 [2024-12-08 06:31:56.671216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.635 [2024-12-08 06:31:56.671237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.635 [2024-12-08 06:31:56.671451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.635 [2024-12-08 06:31:56.671666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.635 [2024-12-08 06:31:56.671687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.635 [2024-12-08 06:31:56.671705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.635 [2024-12-08 06:31:56.671728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.635 [2024-12-08 06:31:56.684307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.635 [2024-12-08 06:31:56.684791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.635 [2024-12-08 06:31:56.684833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.635 [2024-12-08 06:31:56.684853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.635 [2024-12-08 06:31:56.685089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.635 [2024-12-08 06:31:56.685304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.635 [2024-12-08 06:31:56.685326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.635 [2024-12-08 06:31:56.685343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.635 [2024-12-08 06:31:56.685359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.635 [2024-12-08 06:31:56.697849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.635 [2024-12-08 06:31:56.698321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.635 [2024-12-08 06:31:56.698359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.635 [2024-12-08 06:31:56.698378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.635 [2024-12-08 06:31:56.698591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.635 [2024-12-08 06:31:56.698833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.635 [2024-12-08 06:31:56.698857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.635 [2024-12-08 06:31:56.698875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.635 [2024-12-08 06:31:56.698890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.635 [2024-12-08 06:31:56.711423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.635 [2024-12-08 06:31:56.711910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.635 [2024-12-08 06:31:56.711951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.635 [2024-12-08 06:31:56.711971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.635 [2024-12-08 06:31:56.712204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.635 [2024-12-08 06:31:56.712445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.635 [2024-12-08 06:31:56.712467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.635 [2024-12-08 06:31:56.712485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.635 [2024-12-08 06:31:56.712500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.635 [2024-12-08 06:31:56.725047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.635 [2024-12-08 06:31:56.725400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.635 [2024-12-08 06:31:56.725428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.635 [2024-12-08 06:31:56.725444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.635 [2024-12-08 06:31:56.725649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.635 [2024-12-08 06:31:56.725886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.635 [2024-12-08 06:31:56.725909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.635 [2024-12-08 06:31:56.725923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.635 [2024-12-08 06:31:56.725936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.635 [2024-12-08 06:31:56.738455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.635 [2024-12-08 06:31:56.738786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.635 [2024-12-08 06:31:56.738816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420
00:28:06.635 [2024-12-08 06:31:56.738840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set
00:28:06.635 [2024-12-08 06:31:56.739066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor
00:28:06.635 [2024-12-08 06:31:56.739275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.635 [2024-12-08 06:31:56.739296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.636 [2024-12-08 06:31:56.739309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.636 [2024-12-08 06:31:56.739321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.898 [2024-12-08 06:31:56.752044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.899 [2024-12-08 06:31:56.752380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.899 [2024-12-08 06:31:56.752407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.899 [2024-12-08 06:31:56.752423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.899 [2024-12-08 06:31:56.752627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.899 [2024-12-08 06:31:56.752865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.899 [2024-12-08 06:31:56.752888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.899 [2024-12-08 06:31:56.752902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.899 [2024-12-08 06:31:56.752914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.899 [2024-12-08 06:31:56.765603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.899 [2024-12-08 06:31:56.765964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.899 [2024-12-08 06:31:56.765993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.899 [2024-12-08 06:31:56.766008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.899 [2024-12-08 06:31:56.766226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.899 [2024-12-08 06:31:56.766434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.899 [2024-12-08 06:31:56.766455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.899 [2024-12-08 06:31:56.766468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.899 [2024-12-08 06:31:56.766480] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.899 [2024-12-08 06:31:56.779318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.899 [2024-12-08 06:31:56.779736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.899 [2024-12-08 06:31:56.779766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.899 [2024-12-08 06:31:56.779782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.899 [2024-12-08 06:31:56.779999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.899 [2024-12-08 06:31:56.780234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.899 [2024-12-08 06:31:56.780256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.899 [2024-12-08 06:31:56.780270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.899 [2024-12-08 06:31:56.780283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.899 [2024-12-08 06:31:56.792975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.899 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.899 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:06.899 [2024-12-08 06:31:56.793396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.899 [2024-12-08 06:31:56.793426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.899 [2024-12-08 06:31:56.793442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.899 [2024-12-08 06:31:56.793645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.899 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:06.899 [2024-12-08 06:31:56.793889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.899 [2024-12-08 06:31:56.793914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.899 [2024-12-08 06:31:56.793928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.899 [2024-12-08 06:31:56.793942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.899 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:06.899 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.899 [2024-12-08 06:31:56.806508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.899 [2024-12-08 06:31:56.806888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.899 [2024-12-08 06:31:56.806917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.899 [2024-12-08 06:31:56.806934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.899 [2024-12-08 06:31:56.807157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.899 [2024-12-08 06:31:56.807365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.899 [2024-12-08 06:31:56.807386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.899 [2024-12-08 06:31:56.807399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.899 [2024-12-08 06:31:56.807412] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.899 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.899 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:06.899 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.899 [2024-12-08 06:31:56.820111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.899 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.899 [2024-12-08 06:31:56.820579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.899 [2024-12-08 06:31:56.820608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.899 [2024-12-08 06:31:56.820629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.899 [2024-12-08 06:31:56.820881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.899 [2024-12-08 06:31:56.821128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.899 [2024-12-08 06:31:56.821150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.899 [2024-12-08 06:31:56.821164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.899 [2024-12-08 06:31:56.821177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.899 [2024-12-08 06:31:56.826687] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.899 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.899 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:06.899 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.899 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.899 [2024-12-08 06:31:56.833595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.899 [2024-12-08 06:31:56.833997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.899 [2024-12-08 06:31:56.834025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.899 [2024-12-08 06:31:56.834056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.899 [2024-12-08 06:31:56.834259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.899 [2024-12-08 06:31:56.834465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.899 [2024-12-08 06:31:56.834486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.899 [2024-12-08 06:31:56.834499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.899 [2024-12-08 06:31:56.834512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.899 [2024-12-08 06:31:56.847172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.899 [2024-12-08 06:31:56.847644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.899 [2024-12-08 06:31:56.847673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.899 [2024-12-08 06:31:56.847690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.899 [2024-12-08 06:31:56.847938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.899 [2024-12-08 06:31:56.848189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.899 [2024-12-08 06:31:56.848211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.899 [2024-12-08 06:31:56.848225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.899 [2024-12-08 06:31:56.848247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.899 [2024-12-08 06:31:56.860808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.899 [2024-12-08 06:31:56.861196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.899 [2024-12-08 06:31:56.861223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.899 [2024-12-08 06:31:56.861244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.900 [2024-12-08 06:31:56.861447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.900 [2024-12-08 06:31:56.861654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.900 [2024-12-08 06:31:56.861676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.900 [2024-12-08 06:31:56.861689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.900 [2024-12-08 06:31:56.861702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.900 Malloc0 00:28:06.900 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.900 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:06.900 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.900 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.900 [2024-12-08 06:31:56.874354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.900 [2024-12-08 06:31:56.874820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.900 [2024-12-08 06:31:56.874865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.900 [2024-12-08 06:31:56.874884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.900 [2024-12-08 06:31:56.875114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.900 [2024-12-08 06:31:56.875328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.900 [2024-12-08 06:31:56.875350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.900 [2024-12-08 06:31:56.875365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.900 [2024-12-08 06:31:56.875379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.900 3721.17 IOPS, 14.54 MiB/s [2024-12-08T05:31:57.019Z] 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.900 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:06.900 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.900 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.900 [2024-12-08 06:31:56.887873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.900 [2024-12-08 06:31:56.888284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.900 [2024-12-08 06:31:56.888312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baec60 with addr=10.0.0.2, port=4420 00:28:06.900 [2024-12-08 06:31:56.888327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baec60 is same with the state(6) to be set 00:28:06.900 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.900 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:06.900 [2024-12-08 06:31:56.888545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baec60 (9): Bad file descriptor 00:28:06.900 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.900 [2024-12-08 06:31:56.888787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.900 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.900 [2024-12-08 06:31:56.888811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.900 [2024-12-08 06:31:56.888826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.900 [2024-12-08 06:31:56.888840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:06.900 [2024-12-08 06:31:56.892408] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.900 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.900 06:31:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1176176 00:28:06.900 [2024-12-08 06:31:56.901490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.158 [2024-12-08 06:31:57.043691] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:28:09.031 4213.86 IOPS, 16.46 MiB/s [2024-12-08T05:32:00.085Z] 4793.50 IOPS, 18.72 MiB/s [2024-12-08T05:32:01.026Z] 5229.56 IOPS, 20.43 MiB/s [2024-12-08T05:32:01.962Z] 5588.70 IOPS, 21.83 MiB/s [2024-12-08T05:32:02.900Z] 5883.27 IOPS, 22.98 MiB/s [2024-12-08T05:32:04.275Z] 6137.58 IOPS, 23.97 MiB/s [2024-12-08T05:32:05.226Z] 6356.62 IOPS, 24.83 MiB/s [2024-12-08T05:32:06.163Z] 6543.57 IOPS, 25.56 MiB/s 00:28:16.044 Latency(us) 00:28:16.044 [2024-12-08T05:32:06.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.044 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:16.044 Verification LBA range: start 0x0 length 0x4000 00:28:16.044 Nvme1n1 : 15.01 6699.34 26.17 10253.37 0.00 7528.24 782.79 76118.85 00:28:16.044 [2024-12-08T05:32:06.163Z] =================================================================================================================== 00:28:16.044 [2024-12-08T05:32:06.163Z] Total : 6699.34 26.17 10253.37 0.00 7528.24 782.79 76118.85 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:16.044 rmmod nvme_tcp 00:28:16.044 rmmod nvme_fabrics 00:28:16.044 rmmod nvme_keyring 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1176925 ']' 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1176925 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1176925 ']' 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1176925 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:16.044 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.303 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1176925 00:28:16.303 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:16.303 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:16.303 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1176925' 00:28:16.303 killing process with pid 1176925 00:28:16.303 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1176925 00:28:16.303 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1176925 00:28:16.564 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:16.564 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:16.564 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:16.564 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:16.564 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:16.564 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:16.564 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:16.564 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.564 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.564 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.564 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.564 06:32:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.471 06:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:18.471 00:28:18.471 real 0m22.662s 00:28:18.471 user 1m0.234s 00:28:18.471 sys 0m4.548s 00:28:18.471 06:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:18.471 06:32:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:18.471 ************************************ 00:28:18.471 END TEST nvmf_bdevperf 00:28:18.471 ************************************ 00:28:18.471 06:32:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:18.472 06:32:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:18.472 06:32:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:18.472 06:32:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.472 ************************************ 00:28:18.472 START TEST nvmf_target_disconnect 00:28:18.472 ************************************ 00:28:18.472 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:18.472 * Looking for test storage... 
00:28:18.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:18.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.731 --rc genhtml_branch_coverage=1 00:28:18.731 --rc genhtml_function_coverage=1 00:28:18.731 --rc genhtml_legend=1 00:28:18.731 --rc geninfo_all_blocks=1 00:28:18.731 --rc geninfo_unexecuted_blocks=1 00:28:18.731 00:28:18.731 ' 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:18.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.731 --rc genhtml_branch_coverage=1 00:28:18.731 --rc genhtml_function_coverage=1 00:28:18.731 --rc genhtml_legend=1 00:28:18.731 --rc geninfo_all_blocks=1 00:28:18.731 --rc geninfo_unexecuted_blocks=1 00:28:18.731 00:28:18.731 ' 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:18.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.731 --rc genhtml_branch_coverage=1 00:28:18.731 --rc genhtml_function_coverage=1 00:28:18.731 --rc genhtml_legend=1 00:28:18.731 --rc geninfo_all_blocks=1 00:28:18.731 --rc geninfo_unexecuted_blocks=1 00:28:18.731 00:28:18.731 ' 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:18.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.731 --rc genhtml_branch_coverage=1 00:28:18.731 --rc genhtml_function_coverage=1 00:28:18.731 --rc genhtml_legend=1 00:28:18.731 --rc geninfo_all_blocks=1 00:28:18.731 --rc geninfo_unexecuted_blocks=1 00:28:18.731 00:28:18.731 ' 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:18.731 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:18.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:18.732 06:32:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:21.265 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.265 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:21.266 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:21.266 Found net devices under 0000:84:00.0: cvl_0_0 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:21.266 Found net devices under 0000:84:00.1: cvl_0_1 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:21.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:28:21.266 00:28:21.266 --- 10.0.0.2 ping statistics --- 00:28:21.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.266 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:28:21.266 00:28:21.266 --- 10.0.0.1 ping statistics --- 00:28:21.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.266 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:21.266 ************************************ 00:28:21.266 START TEST nvmf_target_disconnect_tc1 00:28:21.266 ************************************ 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:21.266 06:32:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:21.266 06:32:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:21.266 [2024-12-08 06:32:11.087516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.266 [2024-12-08 06:32:11.087573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6aa570 with addr=10.0.0.2, port=4420 00:28:21.267 [2024-12-08 06:32:11.087605] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:21.267 [2024-12-08 06:32:11.087631] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:21.267 [2024-12-08 06:32:11.087652] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:21.267 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:21.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:21.267 Initializing NVMe Controllers 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:21.267 00:28:21.267 real 0m0.101s 00:28:21.267 user 0m0.050s 00:28:21.267 sys 0m0.051s 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:21.267 ************************************ 00:28:21.267 END TEST nvmf_target_disconnect_tc1 00:28:21.267 ************************************ 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
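Before tc2 starts below, note that the tc1 pass just traced hinges on inverted exit-status logic: the test succeeds precisely because spdk_nvme_probe() cannot reach 10.0.0.2 while no target is listening yet. The NOT/valid_exec_arg dance from common/autotest_common.sh boils down to roughly this sketch (simplified; folding every signal-death the same way is an assumption here, the real helper special-cases core-dump signals):

  NOT() {
      local es=0
      "$@" || es=$?                        # run the wrapped command, capture its status
      ((es > 128)) && es=$((es & 0x7f))    # fold signal deaths (assumption, see above)
      ((es != 0))                          # NOT succeeds only when the command failed
  }
  # tc1 in one line: expect the probe to fail while nothing listens on port 4420
  NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

Hence the es=1 and (( !es == 0 )) checks above, ending in real 0m0.101s and END TEST.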
00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:21.267 ************************************ 00:28:21.267 START TEST nvmf_target_disconnect_tc2 00:28:21.267 ************************************ 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1180091 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1180091 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1180091 ']' 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.267 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.267 [2024-12-08 06:32:11.203352] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:28:21.267 [2024-12-08 06:32:11.203427] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.267 [2024-12-08 06:32:11.276851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:21.267 [2024-12-08 06:32:11.335189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.267 [2024-12-08 06:32:11.335252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
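tc2 now brings up a real target inside the namespace. The nvmfappstart / waitforlisten / rpc_cmd sequence traced around this point collapses to the sketch below; rpc.py and the /var/tmp/spdk.sock default are stock SPDK, while the retry bounds are illustrative only:

  rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
  for ((i = 0; i < 100; i++)); do             # waitforlisten, simplified
      $rpc rpc_get_methods &>/dev/null && break
      sleep 0.1
  done
  $rpc bdev_malloc_create 64 512 -b Malloc0   # 64 MB ramdisk, 512 B blocks
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420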
00:28:21.267 [2024-12-08 06:32:11.335279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.267 [2024-12-08 06:32:11.335290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.267 [2024-12-08 06:32:11.335299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.267 [2024-12-08 06:32:11.336892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:21.267 [2024-12-08 06:32:11.336954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:21.267 [2024-12-08 06:32:11.337019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:21.267 [2024-12-08 06:32:11.337022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.526 Malloc0 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.526 [2024-12-08 06:32:11.510355] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.526 06:32:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.526 [2024-12-08 06:32:11.538608] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1180128 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:21.526 06:32:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:23.434 06:32:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1180091 00:28:23.717 06:32:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:23.717 Read completed with error (sct=0, sc=8) 00:28:23.717 starting I/O failed 00:28:23.717 Read completed with error (sct=0, sc=8) 00:28:23.717 starting I/O failed 00:28:23.717 Read completed with error (sct=0, sc=8) 00:28:23.717 starting I/O failed 00:28:23.717 Read completed with error (sct=0, sc=8) 00:28:23.717 starting I/O failed 00:28:23.717 Read completed with error (sct=0, sc=8) 00:28:23.717 starting I/O failed 00:28:23.717 Read completed with error (sct=0, sc=8) 00:28:23.717 starting I/O failed 00:28:23.717 Read completed with error 
(sct=0, sc=8) 00:28:23.717 starting I/O failed 
00:28:23.717 [... the remaining queued reads and writes (the workload runs 32-deep per qpair) complete with error (sct=0, sc=8), each followed by 'starting I/O failed' ...] 
00:28:23.717 [2024-12-08 06:32:13.567779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 
00:28:23.718 [... another full burst of (sct=0, sc=8) completion failures ...] 
00:28:23.718 [2024-12-08 06:32:13.568116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 
00:28:23.718 [... another full burst of (sct=0, sc=8) completion failures ...] 
00:28:23.718 [2024-12-08 06:32:13.568461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 
00:28:23.719 [... final burst ...] 
00:28:23.719 Read completed with error (sct=0, sc=8) 00:28:23.719
starting I/O failed 00:28:23.719 Read completed with error (sct=0, sc=8) 00:28:23.719 starting I/O failed 00:28:23.719 Read completed with error (sct=0, sc=8) 00:28:23.719 starting I/O failed 00:28:23.719 Read completed with error (sct=0, sc=8) 00:28:23.719 starting I/O failed 00:28:23.719 Read completed with error (sct=0, sc=8) 00:28:23.719 starting I/O failed 00:28:23.719 Write completed with error (sct=0, sc=8) 00:28:23.719 starting I/O failed 00:28:23.719 [2024-12-08 06:32:13.568776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:23.719 [2024-12-08 06:32:13.568945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.719 [2024-12-08 06:32:13.568995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.719 qpair failed and we were unable to recover it. 00:28:23.719 [2024-12-08 06:32:13.569140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.719 [2024-12-08 06:32:13.569166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.719 qpair failed and we were unable to recover it. 00:28:23.719 [2024-12-08 06:32:13.569310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.719 [2024-12-08 06:32:13.569335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.719 qpair failed and we were unable to recover it. 00:28:23.719 [2024-12-08 06:32:13.569520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.719 [2024-12-08 06:32:13.569547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.719 qpair failed and we were unable to recover it. 00:28:23.719 [2024-12-08 06:32:13.569654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.719 [2024-12-08 06:32:13.569678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.719 qpair failed and we were unable to recover it. 00:28:23.719 [2024-12-08 06:32:13.569828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.719 [2024-12-08 06:32:13.569862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.719 qpair failed and we were unable to recover it. 00:28:23.719 [2024-12-08 06:32:13.569988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.719 [2024-12-08 06:32:13.570034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.719 qpair failed and we were unable to recover it. 00:28:23.719 [2024-12-08 06:32:13.570261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.719 [2024-12-08 06:32:13.570285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.719 qpair failed and we were unable to recover it. 
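What produced the failure storm above: host/target_disconnect.sh points the reconnect workload at the live target, then SIGKILLs the target out from under it. Every queued I/O completes with the aborted status the example prints as (sct=0, sc=8), each of the four qpairs dies with CQ transport error -6, and the initiator then spins on reconnects that get ECONNREFUSED (errno = 111). In outline, with the PIDs captured in the trace:

  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!       # 1180128 above
  sleep 2
  kill -9 "$nvmfpid"    # 1180091: hard-kill nvmf_tgt mid-run
  sleep 2               # the sc=8 completions and errno-111 retries follow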
00:28:23.719 [2024-12-08 06:32:13.570403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.719 [2024-12-08 06:32:13.570427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.719 qpair failed and we were unable to recover it. 
00:28:23.719 [... the same connect() failed (errno = 111) / sock connection error / 'qpair failed and we were unable to recover it.' triple repeats dozens of times (timestamps 06:32:13.570 through 06:32:13.585, tqpair addresses 0x7f753c000b90, 0x7f7540000b90 and 0xca45d0) while the initiator keeps retrying the killed target; the tail of the burst follows ...]
00:28:23.722 [2024-12-08 06:32:13.585309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.722 [2024-12-08 06:32:13.585333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.722 qpair failed and we were unable to recover it. 00:28:23.722 [2024-12-08 06:32:13.585466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.722 [2024-12-08 06:32:13.585496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.722 qpair failed and we were unable to recover it. 00:28:23.722 [2024-12-08 06:32:13.585612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.722 [2024-12-08 06:32:13.585638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.722 qpair failed and we were unable to recover it. 00:28:23.722 [2024-12-08 06:32:13.585760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.722 [2024-12-08 06:32:13.585798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.722 qpair failed and we were unable to recover it. 00:28:23.722 [2024-12-08 06:32:13.585922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.722 [2024-12-08 06:32:13.585948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.722 qpair failed and we were unable to recover it. 00:28:23.722 [2024-12-08 06:32:13.586068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.722 [2024-12-08 06:32:13.586093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.722 qpair failed and we were unable to recover it. 00:28:23.722 [2024-12-08 06:32:13.586192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.722 [2024-12-08 06:32:13.586216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.722 qpair failed and we were unable to recover it. 00:28:23.722 [2024-12-08 06:32:13.586356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.722 [2024-12-08 06:32:13.586380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.722 qpair failed and we were unable to recover it. 00:28:23.722 [2024-12-08 06:32:13.586531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.722 [2024-12-08 06:32:13.586556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.722 qpair failed and we were unable to recover it. 00:28:23.722 [2024-12-08 06:32:13.586672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.586696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 
00:28:23.723 [2024-12-08 06:32:13.586884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.586910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.587033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.587073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.587215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.587238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.587390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.587414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.587515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.587540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.587676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.587700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.587843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.587869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.588000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.588040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.588160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.588183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.588309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.588333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 
00:28:23.723 [2024-12-08 06:32:13.588457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.588481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.588604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.588627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.588744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.588770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.588864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.588890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.589043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.589068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.589220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.589243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.589371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.589395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.589549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.589573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.589718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.589752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.589867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.589892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 
00:28:23.723 [2024-12-08 06:32:13.589984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.723 [2024-12-08 06:32:13.590023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.723 qpair failed and we were unable to recover it. 00:28:23.723 [2024-12-08 06:32:13.590162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.590200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.590339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.590377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.590517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.590541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.590668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.590693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.590821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.590847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.590945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.590970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.591072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.591096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.591222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.591246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.591388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.591412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 
00:28:23.724 [2024-12-08 06:32:13.591566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.591591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.591752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.591795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.591900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.591938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.592063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.592090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.592225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.592264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.592396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.592421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.592548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.592574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.592672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.592697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.592839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.592865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.592960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.592986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 
00:28:23.724 [2024-12-08 06:32:13.593144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.593182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.593330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.593354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.593510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.593535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.593702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.593740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.593878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.593904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.594079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.594103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.594213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.594241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.594365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.594390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.594545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.724 [2024-12-08 06:32:13.594570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.724 qpair failed and we were unable to recover it. 00:28:23.724 [2024-12-08 06:32:13.594745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.594793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 
00:28:23.725 [2024-12-08 06:32:13.594943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.594981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.595170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.595212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.595364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.595404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.595553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.595610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.595735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.595761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.595882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.595906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.595995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.596034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.596177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.596215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.596356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.596389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.596491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.596515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 
00:28:23.725 [2024-12-08 06:32:13.596610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.596635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.596771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.596811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.596972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.597028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.597148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.597174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.597291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.597315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.597458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.597482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.597663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.597688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.597854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.597880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.598028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.598052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.598230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.598253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 
00:28:23.725 [2024-12-08 06:32:13.598366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.598404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.598530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.598554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.598744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.598770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.598888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.598913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.599036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.599075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.599290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.599320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.599559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.599606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.599747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.725 [2024-12-08 06:32:13.599773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.725 qpair failed and we were unable to recover it. 00:28:23.725 [2024-12-08 06:32:13.599930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.599966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.600140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.600164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 
00:28:23.726 [2024-12-08 06:32:13.600374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.600426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.600597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.600621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.600768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.600795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.600917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.600943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.601064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.601106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.601206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.601256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.601431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.601455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.601589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.601613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.601732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.601757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.601868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.601893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 
00:28:23.726 [2024-12-08 06:32:13.602016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.602040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.602153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.602177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.602308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.602332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.602442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.602467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.602625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.602663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.602839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.602879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.603058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.603084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.603242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.603267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.603485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.603510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.603648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.603673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 
00:28:23.726 [2024-12-08 06:32:13.603764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.603790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.604030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.604053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.604176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.604220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.604324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.604348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.604486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.604510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.604615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.604639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.726 [2024-12-08 06:32:13.604779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.726 [2024-12-08 06:32:13.604819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.726 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.605042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.605083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.605187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.605236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.605353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.605400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 
00:28:23.727 [2024-12-08 06:32:13.605540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.605593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.605749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.605775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.605909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.605940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.606083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.606108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.606215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.606255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.606356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.606381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.606506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.606530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.606642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.606667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.606748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.606774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.606861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.606886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 
00:28:23.727 [2024-12-08 06:32:13.606998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.607023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.607176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.607215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.607334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.607359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.607485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.607511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.607594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.607619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.607790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.607829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.607956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.607983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.608092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.608117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.608244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.608268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.608406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.608442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 
00:28:23.727 [2024-12-08 06:32:13.608554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.608579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.608688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.608734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.608866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.608892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.608989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.609028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.609181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.609206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.727 [2024-12-08 06:32:13.609341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.727 [2024-12-08 06:32:13.609367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.727 qpair failed and we were unable to recover it. 00:28:23.728 [2024-12-08 06:32:13.609454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.609479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 00:28:23.728 [2024-12-08 06:32:13.609608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.609644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 00:28:23.728 [2024-12-08 06:32:13.609775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.609803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 00:28:23.728 [2024-12-08 06:32:13.609895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.609922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 
00:28:23.728 [2024-12-08 06:32:13.610060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.610110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 00:28:23.728 [2024-12-08 06:32:13.610287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.610337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 00:28:23.728 [2024-12-08 06:32:13.610462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.610506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 00:28:23.728 [2024-12-08 06:32:13.610641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.610665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 00:28:23.728 [2024-12-08 06:32:13.610803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.610830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 00:28:23.728 [2024-12-08 06:32:13.610969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.610995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 00:28:23.728 [2024-12-08 06:32:13.611150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.611174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 00:28:23.728 [2024-12-08 06:32:13.611329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.611377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 00:28:23.728 [2024-12-08 06:32:13.611483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.611523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 00:28:23.728 [2024-12-08 06:32:13.611676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.728 [2024-12-08 06:32:13.611699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.728 qpair failed and we were unable to recover it. 
00:28:23.728 [2024-12-08 06:32:13.611863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.728 [2024-12-08 06:32:13.611889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.728 qpair failed and we were unable to recover it.
00:28:23.728 [2024-12-08 06:32:13.612025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.728 [2024-12-08 06:32:13.612050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.728 qpair failed and we were unable to recover it.
00:28:23.728 [2024-12-08 06:32:13.612227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.728 [2024-12-08 06:32:13.612255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.728 qpair failed and we were unable to recover it.
00:28:23.728 [2024-12-08 06:32:13.612353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.728 [2024-12-08 06:32:13.612377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.728 qpair failed and we were unable to recover it.
00:28:23.728 [2024-12-08 06:32:13.612578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.728 [2024-12-08 06:32:13.612618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.728 qpair failed and we were unable to recover it.
00:28:23.728 [2024-12-08 06:32:13.612738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.728 [2024-12-08 06:32:13.612763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.728 qpair failed and we were unable to recover it.
00:28:23.728 [2024-12-08 06:32:13.612921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.728 [2024-12-08 06:32:13.612946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.728 qpair failed and we were unable to recover it.
00:28:23.728 [2024-12-08 06:32:13.613053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.728 [2024-12-08 06:32:13.613095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.728 qpair failed and we were unable to recover it.
00:28:23.728 [2024-12-08 06:32:13.613224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.728 [2024-12-08 06:32:13.613262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.728 qpair failed and we were unable to recover it.
00:28:23.728 [2024-12-08 06:32:13.613408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.728 [2024-12-08 06:32:13.613432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.728 qpair failed and we were unable to recover it.
00:28:23.728 [2024-12-08 06:32:13.613655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.728 [2024-12-08 06:32:13.613691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.728 qpair failed and we were unable to recover it.
00:28:23.728 [2024-12-08 06:32:13.613863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.728 [2024-12-08 06:32:13.613889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.728 qpair failed and we were unable to recover it.
00:28:23.728 [2024-12-08 06:32:13.613988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.728 [2024-12-08 06:32:13.614027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.728 qpair failed and we were unable to recover it.
00:28:23.728 [2024-12-08 06:32:13.614173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.614212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.614404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.614427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.614574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.614598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.614733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.614759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.614886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.614911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.615034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.615058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.615196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.615233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.615403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.615438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.615580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.615606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.615778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.615819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.615978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.616003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.616137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.616176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.616294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.616318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.616476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.616500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.616630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.616655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.616752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.616778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.616925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.616955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.617096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.617120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.617233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.617285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.617447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.617494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.617635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.617671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.617783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.617808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.617952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.617977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.618158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.618218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.618359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.618410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.618580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.618604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.618757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.618797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.729 [2024-12-08 06:32:13.618943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.729 [2024-12-08 06:32:13.618967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.729 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.619162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.619206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.619346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.619397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.619526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.619551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.619630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.619654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.619780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.619805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.620009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.620032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.620201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.620246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.620397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.620430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.620589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.620627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.620801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.620827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.620984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.621008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.621172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.621195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.621369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.621402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.621567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.621590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.621820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.621854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.621969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.622009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.622145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.622190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.622339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.622362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.622553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.622577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.622748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.622773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.622976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.623027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.623153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.623200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.623378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.623401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.730 qpair failed and we were unable to recover it.
00:28:23.730 [2024-12-08 06:32:13.623570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.730 [2024-12-08 06:32:13.623603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.623747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.623772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.623891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.623916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.624093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.624117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.624323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.624374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.624530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.624557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.624714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.624744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.624907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.624933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.625085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.625108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.625302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.625355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.625490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.625526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.625650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.625675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.625830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.625855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.626037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.626061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.626230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.626253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.626486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.626541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.626704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.626749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.626924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.626948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.627036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.627076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.627226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.627264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.627479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.627533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.627678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.627701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.627884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.627910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.628006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.628052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.628209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.628247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.628408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.628432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.628611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.628635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.628806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.628832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.628998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.731 [2024-12-08 06:32:13.629037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.731 qpair failed and we were unable to recover it.
00:28:23.731 [2024-12-08 06:32:13.629234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.629285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.629450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.629474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.629625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.629649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.629770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.629796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.629928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.629952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.630083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.630122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.630314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.630363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.630516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.630540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.630752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.630788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.630937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.630961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.631146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.631169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.631319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.631367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.631522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.631546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.631683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.631708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.631886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.631911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.632085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.632136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.632287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.632315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.632502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.632525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.632698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.632743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.632878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.632903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.633078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.633123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.633256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.633317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.633464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.633516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.633641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.633665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.633777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.633803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.633924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.633956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.634118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.634142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.634313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.634336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.634507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.634541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.634667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.732 [2024-12-08 06:32:13.634704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.732 qpair failed and we were unable to recover it.
00:28:23.732 [2024-12-08 06:32:13.634865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.634905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.635093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.635131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.635278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.635327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.635518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.635553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.635718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.635749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.635896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.635948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.636108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.636179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.636320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.636344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.636509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.636533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.636728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.636754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.636900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.636950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.637135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.637189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.637338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.637390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.637562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.637586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.637753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.637779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.637945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.637996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.638137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.638189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.638334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.638387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.638513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.638552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.638712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.638758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.638924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.638959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.639139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.639171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.639334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.639357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.639630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.639654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.639765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.639790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.639951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.640002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.640163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.640220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.733 [2024-12-08 06:32:13.640423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.733 [2024-12-08 06:32:13.640474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.733 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.640606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.640629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.640762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.640787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.640930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.640984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.641163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.641228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.641376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.641435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.641568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.641605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.641750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.641775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.641948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.642005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.642164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.642223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.642352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.642375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.642503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.642527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.642696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.642742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.642905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.642957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.643057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.643124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.643299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.643356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.643509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.643532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.643687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.643732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.643876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.643916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.643999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.734 [2024-12-08 06:32:13.644023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.734 qpair failed and we were unable to recover it.
00:28:23.734 [2024-12-08 06:32:13.644174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.734 [2024-12-08 06:32:13.644227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.734 qpair failed and we were unable to recover it. 00:28:23.734 [2024-12-08 06:32:13.644403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.734 [2024-12-08 06:32:13.644459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.734 qpair failed and we were unable to recover it. 00:28:23.734 [2024-12-08 06:32:13.644600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.734 [2024-12-08 06:32:13.644624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.734 qpair failed and we were unable to recover it. 00:28:23.734 [2024-12-08 06:32:13.644761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.734 [2024-12-08 06:32:13.644788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.734 qpair failed and we were unable to recover it. 00:28:23.734 [2024-12-08 06:32:13.644923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.734 [2024-12-08 06:32:13.644972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.734 qpair failed and we were unable to recover it. 00:28:23.734 [2024-12-08 06:32:13.645158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.734 [2024-12-08 06:32:13.645208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.734 qpair failed and we were unable to recover it. 00:28:23.734 [2024-12-08 06:32:13.645358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.734 [2024-12-08 06:32:13.645382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.734 qpair failed and we were unable to recover it. 00:28:23.734 [2024-12-08 06:32:13.645628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.734 [2024-12-08 06:32:13.645662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.734 qpair failed and we were unable to recover it. 00:28:23.734 [2024-12-08 06:32:13.645867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.734 [2024-12-08 06:32:13.645919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.734 qpair failed and we were unable to recover it. 00:28:23.734 [2024-12-08 06:32:13.646051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.734 [2024-12-08 06:32:13.646102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 
00:28:23.735 [2024-12-08 06:32:13.646260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.646304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.646501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.646536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.646681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.646704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.646861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.646901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.647036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.647061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.647232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.647256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.647509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.647544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.647709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.647738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.647951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.648008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.648143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.648194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 
00:28:23.735 [2024-12-08 06:32:13.648413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.648446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.648615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.648638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.648792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.648817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.649004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.649065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.649371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.649428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.649556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.649579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.649783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.649852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.650028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.650100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.650251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.650303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.650496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.650531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 
00:28:23.735 [2024-12-08 06:32:13.650793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.650818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.651090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.651139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.651314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.651365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.651474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.651512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.651665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.651689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.651860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.651913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.652072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.652129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.652296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.735 [2024-12-08 06:32:13.652349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.735 qpair failed and we were unable to recover it. 00:28:23.735 [2024-12-08 06:32:13.652494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.652517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.652729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.652766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 
00:28:23.736 [2024-12-08 06:32:13.652944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.652969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.653127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.653165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.653283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.653349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.653544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.653567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.653786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.653811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.653994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.654046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.654185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.654239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.654379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.654434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.654606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.654634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.654784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.654852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 
00:28:23.736 [2024-12-08 06:32:13.655019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.655054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.655210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.655234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.655468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.655524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.655658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.655681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.655836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.655886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.656066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.656117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.656281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.656326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.656475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.656498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.656604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.656629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 00:28:23.736 [2024-12-08 06:32:13.656764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.736 [2024-12-08 06:32:13.656793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.736 qpair failed and we were unable to recover it. 
00:28:23.736 [2024-12-08 06:32:13.656954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.656979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.657108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.657132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.657366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.657404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.657556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.657579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.657707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.657753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.657878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.657904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.658093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.658116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.658232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.658255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.658400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.658424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.658621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.658644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 
00:28:23.737 [2024-12-08 06:32:13.658857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.658902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.659047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.659099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.659251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.659304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.659478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.659511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.659670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.659693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.659878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.659931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.660079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.660139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.660296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.660350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.660497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.660521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.660643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.660667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 
00:28:23.737 [2024-12-08 06:32:13.660829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.660855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.661030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.661055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.661262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.661285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.661480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.661503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.661692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.661716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.661943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.661995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.662131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.737 [2024-12-08 06:32:13.662185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.737 qpair failed and we were unable to recover it. 00:28:23.737 [2024-12-08 06:32:13.662329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.662382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.662514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.662553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.662697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.662754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 
00:28:23.738 [2024-12-08 06:32:13.662906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.662954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.663135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.663158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.663290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.663329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.663429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.663453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.663628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.663652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.663759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.663783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.663944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.663996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.664190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.664231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.664409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.664460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.664598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.664637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 
00:28:23.738 [2024-12-08 06:32:13.664776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.664801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.664994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.665048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.665211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.665297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.665484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.665518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.665667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.665697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.665935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.665988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.666166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.666220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.666420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.666471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.666641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.666676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.666899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.666948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 
00:28:23.738 [2024-12-08 06:32:13.667084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.667134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.667329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.667378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.667510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.667534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.667697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.667753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.667914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.667976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.668125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.668190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.668327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.738 [2024-12-08 06:32:13.668350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.738 qpair failed and we were unable to recover it. 00:28:23.738 [2024-12-08 06:32:13.668554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.668581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.668695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.668718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.668881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.668905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 
00:28:23.739 [2024-12-08 06:32:13.669113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.669148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.669329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.669353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.669502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.669525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.669770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.669796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.669938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.669989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.670134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.670185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.670384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.670438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.670581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.670616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.670800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.670860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.671141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.671179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 
00:28:23.739 [2024-12-08 06:32:13.671283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.671306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.671520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.671568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.671753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.671777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.671935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.671989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.672177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.672229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.672365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.672399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.672606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.672630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.672833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.672882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.673072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.673121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.673272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.673330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 
00:28:23.739 [2024-12-08 06:32:13.673554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.673577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.673767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.673796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.674029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.674085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.674232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.674301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.674522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.674551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.739 [2024-12-08 06:32:13.674718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.739 [2024-12-08 06:32:13.674767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.739 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.674939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.675004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.675178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.675228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.675384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.675444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.675665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.675714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 
00:28:23.740 [2024-12-08 06:32:13.675861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.675920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.676118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.676153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.676313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.676363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.676500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.676523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.676733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.676784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.676911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.676974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.677174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.677226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.677485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.677535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.677705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.677756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.677872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.677896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 
00:28:23.740 [2024-12-08 06:32:13.678101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.678142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.678275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.678329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.678499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.678553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.678713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.678771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.678998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.679033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.679145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.679169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.679337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.679392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.679558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.679582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.679742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.679766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 00:28:23.740 [2024-12-08 06:32:13.679930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.740 [2024-12-08 06:32:13.679954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:23.740 qpair failed and we were unable to recover it. 
00:28:23.740 [2024-12-08 06:32:13.680054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.740 [2024-12-08 06:32:13.680078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.740 qpair failed and we were unable to recover it.
00:28:23.740 [2024-12-08 06:32:13.680243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.740 [2024-12-08 06:32:13.680281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.740 qpair failed and we were unable to recover it.
00:28:23.740 [2024-12-08 06:32:13.680491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.740 [2024-12-08 06:32:13.680515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.740 qpair failed and we were unable to recover it.
00:28:23.740 [2024-12-08 06:32:13.680655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.740 [2024-12-08 06:32:13.680679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.680832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.680857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.681007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.681033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.681178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.681217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.681389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.681412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.681608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.681632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.681845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.681897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.682084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.682119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.682270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.682324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.682518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.682541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.682740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.682765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.682924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.682975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.683244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.683295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.683455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.683521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.683657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.683680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.683847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.683915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.684042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.684066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.684222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.684260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.684495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.684530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.684674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.684712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.684943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.684979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.685154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.685177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.685338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.685361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.685554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.685587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.685843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.685896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.686072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.686126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.686289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.686347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.686535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.741 [2024-12-08 06:32:13.686570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.741 qpair failed and we were unable to recover it.
00:28:23.741 [2024-12-08 06:32:13.686743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.686783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.686947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.686997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.687118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.687156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.687361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.687384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.687544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.687568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.687802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.687843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.687981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.688005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.688205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.688228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.688474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.688509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.688643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.688666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.688830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.688880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.689086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.689118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.689313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.689367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.689491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.689514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.689697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.689744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.689897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.689950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.690204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.690330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.690578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.690646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.690861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.690889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.691050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.691115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.691319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.691392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.691649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.691714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.691884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.691911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.742 qpair failed and we were unable to recover it.
00:28:23.742 [2024-12-08 06:32:13.692111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.742 [2024-12-08 06:32:13.692159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.692331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.692401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.692638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.692661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.692809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.692834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.693044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.693114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.693363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.693427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.693639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.693704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.693998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.694023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.694278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.694342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.694561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.694626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.694912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.694937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.695096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.695119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.695305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.695374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.695599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.695663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.695929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.695953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.696099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.696162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.696387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.696452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.696672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.696753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.696927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.696952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.697127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.697190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.697466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.697530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.697755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.697796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.697955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.697984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.698173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.698196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.698352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.698422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.698592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.698663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.698878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.698903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.699080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.743 [2024-12-08 06:32:13.699144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.743 qpair failed and we were unable to recover it.
00:28:23.743 [2024-12-08 06:32:13.699357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.699421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.699651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.699715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.699871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.699895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.700124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.700190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.700391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.700455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.700658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.700738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.700894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.700918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.701065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.701089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.701207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.701274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.701495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.701559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.701771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.701795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.701874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.701897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.702081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.702145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.702359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.702424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.702606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.702671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.702881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.702907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.703029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.703068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.703185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.703252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.703493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.703557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.703777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.703802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.703898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.703922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.704093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.704157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.704378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.704442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.704643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.704706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.704878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.704903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.705060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.705084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.705218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.705268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.705435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.705499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.744 qpair failed and we were unable to recover it.
00:28:23.744 [2024-12-08 06:32:13.705742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.744 [2024-12-08 06:32:13.705798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.705909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.705933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.706065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.706130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.706375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.706440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.706615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.706678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.706899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.706924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.707026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.707055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.707165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.707189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.707311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.707375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.707596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.707661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.707995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.708020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.708185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.708249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.708484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.708507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.708640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.708705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.708964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.709038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.709352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.709409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.709676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.709762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.710035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.710098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.710360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.710405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.710546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.710610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.710847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.710912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.711173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.711218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.711384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.711459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.711662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.711743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.712057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.712105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.712304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.712377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.712629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.712692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.713014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.713062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.713254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.713317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.745 [2024-12-08 06:32:13.713539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.745 [2024-12-08 06:32:13.713603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.745 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.713839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.713896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.714048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.714111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.714365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.714429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.714695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.714790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.715086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.715149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.715397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.715460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.715700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.715808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.716039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.716102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.716350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.716414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.716620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.716684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.716914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.716972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.717198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.717263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.717613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.717677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.717925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.717988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.718168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.718232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.718466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.718520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.718701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.718792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.719098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.719161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.719503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.719558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.719836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.719902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.720111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.720176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.720460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.720518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.720768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.720828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.721081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.721146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.721464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.721526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.721754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.721819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.722138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.722209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.746 [2024-12-08 06:32:13.722520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.746 [2024-12-08 06:32:13.722579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.746 qpair failed and we were unable to recover it.
00:28:23.747 [2024-12-08 06:32:13.722790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.747 [2024-12-08 06:32:13.722856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.747 qpair failed and we were unable to recover it.
00:28:23.747 [2024-12-08 06:32:13.723052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.747 [2024-12-08 06:32:13.723119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.747 qpair failed and we were unable to recover it.
00:28:23.747 [2024-12-08 06:32:13.723449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.723516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.723787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.723854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.724083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.724146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.724381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.724440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.724674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.724749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.725009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.725072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.725376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.725439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.725696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.725773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.726130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.726197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.726441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.726505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 
00:28:23.747 [2024-12-08 06:32:13.726712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.726788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.727037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.727099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.727325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.727389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.727652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.727716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.727940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.728004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.728237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.728302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.728522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.728585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.728799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.728865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.729104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.729168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.729373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.729436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 
00:28:23.747 [2024-12-08 06:32:13.729733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.729809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.730067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.730131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.730381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.730454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.730675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.730750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.730967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.731030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.731237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.731308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.747 [2024-12-08 06:32:13.731586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.747 [2024-12-08 06:32:13.731660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.747 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.731895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.731959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.732277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.732346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.732572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.732639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 
00:28:23.748 [2024-12-08 06:32:13.732852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.732916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.733142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.733205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.733406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.733468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.733688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.733768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.733983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.734053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.734303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.734366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.734635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.734699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.734953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.735017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.735336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.735406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.735622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.735692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 
00:28:23.748 [2024-12-08 06:32:13.736034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.736099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.736334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.736397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.736596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.736659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.736900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.736966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.737166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.737229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.737455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.737519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.737762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.737827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.738018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.738082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.748 [2024-12-08 06:32:13.738287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.748 [2024-12-08 06:32:13.738359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.748 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.738611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.738674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 
00:28:23.749 [2024-12-08 06:32:13.738978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.739043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.739221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.739284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.739493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.739556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.739863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.739928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.740113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.740177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.740380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.740451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.740820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.740890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.741173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.741237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.741502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.741565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.741748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.741824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 
00:28:23.749 [2024-12-08 06:32:13.742079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.742143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.742363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.742430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.742685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.742767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.743011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.743075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.743411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.743480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.743780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.743845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.744032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.744110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.744364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.744428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.744669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.744748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.744982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.745045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 
00:28:23.749 [2024-12-08 06:32:13.745254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.745328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.745685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.745762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.746115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.746185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.746419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.746483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.749 [2024-12-08 06:32:13.746739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.749 [2024-12-08 06:32:13.746804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.749 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.747130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.747193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.747379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.747444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.747683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.747781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.747996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.748070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.748286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.748361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 
00:28:23.750 [2024-12-08 06:32:13.748693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.748776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.748967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.749031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.749222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.749285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.749458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.749526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.749738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.749803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.750037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.750101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.750435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.750509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.750834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.750900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.751144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.751206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.751575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.751649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 
00:28:23.750 [2024-12-08 06:32:13.751927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.751992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.752240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.752314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.752589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.752653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.752902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.752969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.753175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.753237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.753484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.753547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.753904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.753969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.754277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.754352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.754590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.754654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.754936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.755003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 
00:28:23.750 [2024-12-08 06:32:13.755306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.755377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.755666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.755762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.756054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.756119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.750 qpair failed and we were unable to recover it. 00:28:23.750 [2024-12-08 06:32:13.756327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.750 [2024-12-08 06:32:13.756390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.756626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.756690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.756945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.757009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.757227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.757301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.757511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.757584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.757814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.757879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.758076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.758151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 
00:28:23.751 [2024-12-08 06:32:13.758375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.758439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.758646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.758710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.758932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.758996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.759369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.759434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.759627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.759701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.759918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.759982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.760357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.760426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.760732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.760796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.761018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.761082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.761290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.761360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 
00:28:23.751 [2024-12-08 06:32:13.761617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.761680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.762046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.762115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.762346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.762411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.762651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.762714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.762941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.763006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.763192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.763256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.763511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.763574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.763790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.763862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.764114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.764178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.764404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.764475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 
00:28:23.751 [2024-12-08 06:32:13.764737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.764801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.765012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.765077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.765383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.765447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.765699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.751 [2024-12-08 06:32:13.765781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.751 qpair failed and we were unable to recover it. 00:28:23.751 [2024-12-08 06:32:13.766015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.766089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.766258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.766323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.766508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.766571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.766778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.766849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.767053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.767116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.767322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.767393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 
00:28:23.752 [2024-12-08 06:32:13.767776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.767853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.768124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.768187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.768470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.768533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.768862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.768927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.769177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.769240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.769412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.769476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.769678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.769767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.770055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.770118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.770350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.770414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 00:28:23.752 [2024-12-08 06:32:13.770684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.752 [2024-12-08 06:32:13.770760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:23.752 qpair failed and we were unable to recover it. 
00:28:23.752 [2024-12-08 06:32:13.771014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:23.752 [2024-12-08 06:32:13.771077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:23.752 qpair failed and we were unable to recover it.
00:28:23.752 [2024-12-08 06:32:13.771393 - 06:32:13.816647] the same three-record sequence repeats back-to-back for every retry in this window, always for tqpair=0x7f7548000b90 (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:28:24.035 [2024-12-08 06:32:13.816973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.035 [2024-12-08 06:32:13.817076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.035 qpair failed and we were unable to recover it.
00:28:24.035 [2024-12-08 06:32:13.817398 - 06:32:13.837943] the same sequence repeats back-to-back for tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420, down to the final records below
00:28:24.036 [2024-12-08 06:32:13.838175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.036 [2024-12-08 06:32:13.838240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.036 qpair failed and we were unable to recover it.
00:28:24.036 [2024-12-08 06:32:13.838477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.036 [2024-12-08 06:32:13.838541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.036 qpair failed and we were unable to recover it. 00:28:24.036 [2024-12-08 06:32:13.838801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.036 [2024-12-08 06:32:13.838867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.036 qpair failed and we were unable to recover it. 00:28:24.036 [2024-12-08 06:32:13.839211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.036 [2024-12-08 06:32:13.839274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.036 qpair failed and we were unable to recover it. 00:28:24.036 [2024-12-08 06:32:13.839490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.036 [2024-12-08 06:32:13.839555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.036 qpair failed and we were unable to recover it. 00:28:24.036 [2024-12-08 06:32:13.839763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.036 [2024-12-08 06:32:13.839832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.036 qpair failed and we were unable to recover it. 00:28:24.036 [2024-12-08 06:32:13.840105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.036 [2024-12-08 06:32:13.840168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.036 qpair failed and we were unable to recover it. 00:28:24.036 [2024-12-08 06:32:13.840412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.036 [2024-12-08 06:32:13.840478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.036 qpair failed and we were unable to recover it. 00:28:24.036 [2024-12-08 06:32:13.840820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.036 [2024-12-08 06:32:13.840899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.036 qpair failed and we were unable to recover it. 00:28:24.036 [2024-12-08 06:32:13.841118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.036 [2024-12-08 06:32:13.841184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.036 qpair failed and we were unable to recover it. 00:28:24.036 [2024-12-08 06:32:13.841416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.036 [2024-12-08 06:32:13.841482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.036 qpair failed and we were unable to recover it. 
00:28:24.036 [2024-12-08 06:32:13.841658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.036 [2024-12-08 06:32:13.841748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.036 qpair failed and we were unable to recover it. 00:28:24.036 [2024-12-08 06:32:13.842024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.036 [2024-12-08 06:32:13.842089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.036 qpair failed and we were unable to recover it. 00:28:24.036 [2024-12-08 06:32:13.842313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.036 [2024-12-08 06:32:13.842378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.036 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.842633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.842707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.843044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.843120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.843372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.843437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.843775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.843840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.844104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.844168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.844484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.844560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.844842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.844908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 
00:28:24.037 [2024-12-08 06:32:13.845208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.845284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.845523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.845588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.845793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.845861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.846075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.846140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.846367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.846433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.846687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.846766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.847051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.847115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.847357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.847422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.847773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.847839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.848155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.848225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 
00:28:24.037 [2024-12-08 06:32:13.848535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.848599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.848791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.848862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.849066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.849137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.849453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.849517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.849757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.849824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.850012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.850078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.850281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.850345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.850629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.850694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.850917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.850983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.851281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.851345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 
00:28:24.037 [2024-12-08 06:32:13.851576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.851642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.851907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.851973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.852291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.852355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.852639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.852704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.852960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.853025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.853268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.853332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.853573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.853638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.853851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.853916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.854138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.854203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.854518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.854583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 
00:28:24.037 [2024-12-08 06:32:13.854811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.854877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.855127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.855192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.855431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.855496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.855736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.855811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.856129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.856195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.856421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.856486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.856746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.856812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.857067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.857131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.857364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.857429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.857668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.857748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 
00:28:24.037 [2024-12-08 06:32:13.857983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.858047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.858254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.858319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.858506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.858583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.858760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.858825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.859026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.859091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.859402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.859467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.859717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.859796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.860135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.860200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.860482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.860547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.860761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.860827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 
00:28:24.037 [2024-12-08 06:32:13.861053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.861118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.861327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.861393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.861643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.861708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.861933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.861997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.862212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.862276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.862452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.862517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.862715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.862798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.863019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.863083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.863345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.863410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.863674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.863757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 
00:28:24.037 [2024-12-08 06:32:13.864080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.864145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.864383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.864448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.037 [2024-12-08 06:32:13.864648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.037 [2024-12-08 06:32:13.864713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.037 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.865043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.865110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.865324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.865389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.865614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.865680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.866004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.866071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.866289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.866353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.866558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.866623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.866881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.866948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 
00:28:24.038 [2024-12-08 06:32:13.867187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.867250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.867507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.867571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.867753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.867820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.868024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.868098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.868311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.868376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.868648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.868713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.869083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.869149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.869403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.869468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.869651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.869719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.869998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.870067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 
00:28:24.038 [2024-12-08 06:32:13.870384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.870450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.870631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.870695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.870973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.871037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.871290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.871355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.871544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.871607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.871794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.871860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.872072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.872099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.872397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.872462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.872717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.872800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.873028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.873093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 
00:28:24.038 [2024-12-08 06:32:13.873324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.873389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.873635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.873699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.874016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.874082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.874373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.874437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.874680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.874771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.875058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.875122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.875352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.875417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.875767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.875833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.876061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.876125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.876364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.876429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 
00:28:24.038 [2024-12-08 06:32:13.876776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.876843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.877056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.877132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.877356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.877422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.877654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.877718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.877946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.878023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.878251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.878315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.878538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.878602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.878832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.878899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.879166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.879230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 00:28:24.038 [2024-12-08 06:32:13.879536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.038 [2024-12-08 06:32:13.879601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.038 qpair failed and we were unable to recover it. 
00:28:24.038 [2024-12-08 06:32:13.879836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.879903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.038 [2024-12-08 06:32:13.880134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.880198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.038 [2024-12-08 06:32:13.880404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.880469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.038 [2024-12-08 06:32:13.880686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.880794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.038 [2024-12-08 06:32:13.881101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.881167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.038 [2024-12-08 06:32:13.881429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.881493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.038 [2024-12-08 06:32:13.881757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.881824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.038 [2024-12-08 06:32:13.882185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.882259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.038 [2024-12-08 06:32:13.882461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.882525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.038 [2024-12-08 06:32:13.882749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.882815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.038 [2024-12-08 06:32:13.883026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.883094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.038 [2024-12-08 06:32:13.883292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.883357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.038 [2024-12-08 06:32:13.883556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.883621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.038 [2024-12-08 06:32:13.883892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.038 [2024-12-08 06:32:13.883958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.038 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.884269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.884334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.884562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.884632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.884974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.885041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.885354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.885418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.885748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.885815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.886119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.886185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.886398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.886463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.886678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.886763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.886991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.887055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.887272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.887337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.887569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.887634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.887918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.887984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.888244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.888309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.888536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.888601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.888783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.888853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.889066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.889131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.889364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.889429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.889681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.889761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.890058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.890123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.890366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.890430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.890627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.890692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.891016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.891081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.891260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.891335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.891542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.891607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.891825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.891891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.892121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.892186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.892504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.892569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.892886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.892954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.893157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.893228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.893438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.893513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.893718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.893803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.894044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.894108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.894392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.894457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.894688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.894768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.895031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.895096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.895288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.895353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.895557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.895621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.895822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.895890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.896126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.896191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.896409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.896473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.896684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.896778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.897010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.897074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.897388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.897454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.897694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.897777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.898035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.898100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.898289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.898354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.898610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.898675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.898927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.898994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.899303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.899369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.899717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.899809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.900063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.900127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.900308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.900373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.900625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.900690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.901007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.901072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.901301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.901366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.901710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.901799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.902069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.902135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.902377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.902442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.902748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.902813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.903116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.903181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.903499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.903563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.903758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.903832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.904043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.904119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.904335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.904400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.904608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.904672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.904944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.905010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.905249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.905314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.905532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.905598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.039 qpair failed and we were unable to recover it.
00:28:24.039 [2024-12-08 06:32:13.905862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.039 [2024-12-08 06:32:13.905929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.906183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.906261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.906440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.906505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.906755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.906821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.907134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.907200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.907388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.907462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.907643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.907708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.908045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.908110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.908288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.908352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.908559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.908625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.908878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.908944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.909156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.909226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.909537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.909603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.909893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.909960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.910189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.910254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.910470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.910544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.910853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.910920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.911204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.911269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.911493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.911557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.911879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.911946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.912171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.912235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.912461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.912525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.912765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.912831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.913055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.913120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.913316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.913380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.913636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.913701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.914085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.914151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.914467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.914532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.914817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.914886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.915061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.915126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.915322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.915387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.915599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.915664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.915899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.915965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.916195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.916261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.916514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.916577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.916921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.916988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.917229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.917292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.917544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.917608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.917927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.917992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.918233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.918297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.918578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.918643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.918894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.918971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.919231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.919296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.919548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.919612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.919839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.919906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.920156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.920222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.920403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.920467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.920718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.920803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.921026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.921092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.921347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.921411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.921639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.921704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.921999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.922064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.922380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.922445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.922632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.922706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.922932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.923005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.923202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.923267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.923484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.923549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.923802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.923869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.924180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.924245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.924471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.924535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.924789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.924855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.925101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.925166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.925380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.925444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.040 [2024-12-08 06:32:13.925665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.040 [2024-12-08 06:32:13.925754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.040 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.926009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.926075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.926279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.926342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.926548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.926612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.926896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.926964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.927235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.927300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.927529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.927597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.927855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.927922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.928145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.928208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.928525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.928590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.928945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.929021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.929322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.929386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.929753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.929827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.930068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.930134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.930349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.930413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.930647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.930712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.930977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.931043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.931356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.931419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.931642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.931739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.932054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.932121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.932353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.932417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.932643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.932708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.932952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.933018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.933265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.933330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.933610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.933675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.933910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.933975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.934190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.934256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.934489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.934554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.934825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.934892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.935143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.935208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.935439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.935504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.935821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.935887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.936198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.936263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.936554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.936619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.936849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.936916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.937134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.937201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.937445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.937510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.937761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.937828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.938026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.938091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.938271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.938337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.938590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.938655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.938854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.938920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.939156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.939220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.939538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.939604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.939943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.940010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.940290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.940355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.940646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.940712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.941024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.941090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.941440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.941514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.941779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.941846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.942161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.942227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.942540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.942605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.942844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.942910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.943155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.943220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.943442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.943512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.943755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.943822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.944047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.944112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.944296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.944361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.944536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.944617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.944896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.944962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.945189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.945254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.945507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.945573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.945855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.945922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.946136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.946201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.946392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.041 [2024-12-08 06:32:13.946457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.041 qpair failed and we were unable to recover it.
00:28:24.041 [2024-12-08 06:32:13.946684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-08 06:32:13.946763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-08 06:32:13.946998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-08 06:32:13.947062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-08 06:32:13.947229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-08 06:32:13.947294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-08 06:32:13.947547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-08 06:32:13.947613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.041 qpair failed and we were unable to recover it. 00:28:24.041 [2024-12-08 06:32:13.947844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.041 [2024-12-08 06:32:13.947910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.948119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.948184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.948425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.948490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.948834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.948909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.949222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.949287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.949630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.949706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-08 06:32:13.949974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.950039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.950342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.950407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.950595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.950660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.950931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.950996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.951363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.951428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.951753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.951821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.952122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.952186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.952384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.952450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.952631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.952690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.952943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.953007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-08 06:32:13.953230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.953296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.953548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.953616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.953927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.953993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.954346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.954422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.954655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.954738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.955035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.955100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.955450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.955517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.955828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.955894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.956209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.956275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.956562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.956627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-08 06:32:13.956942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.957007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.957244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.957310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.957519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.957585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.957825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.957901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.958148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.958213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.958454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.958518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.958819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.958885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.959148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.959214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.959452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.959516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.959757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.959824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-08 06:32:13.960049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.960115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.960329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.960395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.960608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.960672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.960874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.960940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.961263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.961327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.961639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.961704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.962069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.962145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.962450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.962515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.962799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.962866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.963050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.963116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-08 06:32:13.963330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.963394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.963625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.963690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.963911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.963977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.964206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.964270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.964478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.964546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.964774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.964839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.965072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.965137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.965390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.965454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.965774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.965839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.966145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.966211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-08 06:32:13.966484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.966549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.966798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.966863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.967054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.967119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.967333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.967398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.967667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.967743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.967965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.968030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.968248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.968314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.968494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.968558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.968764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.968833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.969072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.969138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 
00:28:24.042 [2024-12-08 06:32:13.969372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.969436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.969626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.042 [2024-12-08 06:32:13.969690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.042 qpair failed and we were unable to recover it. 00:28:24.042 [2024-12-08 06:32:13.969958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.970023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.970342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.970417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.970693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.970771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.971025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.971091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.971407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.971470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.971700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.971777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.972030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.972094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.972278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.972342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 
00:28:24.043 [2024-12-08 06:32:13.972548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.972613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.972858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.972924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.973148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.973212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.973466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.973532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.973776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.973843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.974199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.974273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.974578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.974643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.974897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.974964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.975165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.975229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.975460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.975525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 
00:28:24.043 [2024-12-08 06:32:13.975763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.975829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.976092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.976155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.976507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.976582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.976785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.976852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.977084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.977148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.977450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.977514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.977757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.977825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.978045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.978109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.978298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.978363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.978590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.978655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 
00:28:24.043 [2024-12-08 06:32:13.978940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.979007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.979222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.979288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.979521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.979586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.979794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.979859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.980064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.980129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.980329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.980394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.980621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.980686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.981030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.981095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.981309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.981377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.981607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.981671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 
00:28:24.043 [2024-12-08 06:32:13.981897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.981963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.982181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.982246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.982552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.982617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.982835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.982911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.983180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.983244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.983455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.983520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.983786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.983854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.984080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.984144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.984351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.984427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.984643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.984709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 
00:28:24.043 [2024-12-08 06:32:13.984961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.985026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.985305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.985370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.985609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.985674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.985952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.986016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.986193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.986258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.986463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.986527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.986718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.986797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.987020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.987085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.987294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.987358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 00:28:24.043 [2024-12-08 06:32:13.987622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.987687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it. 
00:28:24.043 [2024-12-08 06:32:13.987985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.043 [2024-12-08 06:32:13.988050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.043 qpair failed and we were unable to recover it.
00:28:24.047 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every subsequent reconnect attempt from 2024-12-08 06:32:13.988255 through 2024-12-08 06:32:14.054157, all against the same tqpair and target address ...]
00:28:24.047 [2024-12-08 06:32:14.054358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.054434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.054656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.054737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.054995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.055059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.055293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.055357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.055676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.055754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.055973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.056037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.056269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.056333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.056650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.056715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.056960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.057024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.057305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.057369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 
00:28:24.047 [2024-12-08 06:32:14.057581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.057647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.057930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.057997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.058295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.058361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.058592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.058657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.058928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.058996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.059249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.059314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.059627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.059691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.059938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.060003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.060220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.060284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.060537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.060602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 
00:28:24.047 [2024-12-08 06:32:14.060916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.060983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.061251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.061316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.061543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.061607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.061837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.061905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.062088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.062157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.062381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.062445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.062705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.062785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.063140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.063217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.063409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.063474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.063864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.063930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 
00:28:24.047 [2024-12-08 06:32:14.064113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.064177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.064377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.064441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.064676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.064758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.064991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.065055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.065285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.065350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.065665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.065763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.066009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.066074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.066307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.066372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.066539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.066603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.066857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.066924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 
00:28:24.047 [2024-12-08 06:32:14.067166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.067252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.067545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.067609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.067843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.067909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.068227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.068293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.068496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.068563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.068802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.068868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.069069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.069134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.069393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.069457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.069688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.069768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.069987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.070053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 
00:28:24.047 [2024-12-08 06:32:14.070219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.070284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.070508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.070573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.070771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.070838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.071077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.071140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.071483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.071548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.071764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.071833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.072107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.072172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.072403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.072468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.072647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.072711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.072932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.073000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 
00:28:24.047 [2024-12-08 06:32:14.073235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.073300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.073529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.073593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.073854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.073921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.074128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.074197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.074419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.074483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.047 [2024-12-08 06:32:14.074753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.047 [2024-12-08 06:32:14.074820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.047 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.075120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.075185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.075533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.075606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.075813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.075879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.076142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.076207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.048 [2024-12-08 06:32:14.076486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.076551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.076789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.076855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.077066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.077130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.077353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.077417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.077669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.077755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.078020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.078087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.078272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.078335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.078564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.078629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.078865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.078932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.079147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.079210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.048 [2024-12-08 06:32:14.079433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.079507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.079712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.079804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.080072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.080136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.080365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.080429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.080681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.080763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.081025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.081089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.081403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.081467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.081690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.081783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.082150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.082223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.082470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.082535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.048 [2024-12-08 06:32:14.082786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.082852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.083173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.083238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.083452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.083516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.083753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.083820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.084040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.084110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.084341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.084406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.084740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.084806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.085107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.085171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.085419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.085484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.085717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.085796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.048 [2024-12-08 06:32:14.086048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.086113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.086365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.086429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.086681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.086779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.086977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.087042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.087269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.087334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.087541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.087606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.087859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.087925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.088167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.088233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.088464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.088529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.088841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.088908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.048 [2024-12-08 06:32:14.089247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.089313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.089666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.089745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.089975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.090040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.090247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.090311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.090489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.090557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.090761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.090828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.091088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.091153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.091394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.091459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.091692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.091770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.091983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.092048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.048 [2024-12-08 06:32:14.092280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.092356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.092643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.092707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.092960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.093024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.093210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.093274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.093488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.093553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.093798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.093864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.094051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.094119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.094332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.094397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.094629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.094693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 00:28:24.048 [2024-12-08 06:32:14.094905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.048 [2024-12-08 06:32:14.094970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.048 qpair failed and we were unable to recover it. 
00:28:24.049 [2024-12-08 06:32:14.095220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.095285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.095532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.095597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.095834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.095900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.096150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.096215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.096529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.096594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.096842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.096915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.097118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.097185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.097412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.097476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.097756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.097822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.098074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.098138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 
00:28:24.049 [2024-12-08 06:32:14.098317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.098382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.098613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.098678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.098872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.098937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.099150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.099216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.099470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.099535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.099852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.099919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.100100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.100165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.100410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.100475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.100706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.100786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.101010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.101075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 
00:28:24.049 [2024-12-08 06:32:14.101403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.101467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.101780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.101846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.102133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.102198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.102433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.102497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.102756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.102823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.103095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.103160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.103400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.103464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.103779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.103845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.104016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.104082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.104295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.104359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 
00:28:24.049 [2024-12-08 06:32:14.104574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.104650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.105054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.105122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.105347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.105411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.105642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.105707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.105956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.106031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.106258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.106322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.106538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.106603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.106848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.106913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.107192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.107257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.107502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.107566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 
00:28:24.049 [2024-12-08 06:32:14.107791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.107858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.108073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.108139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.108391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.108455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.108688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.108769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.109101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.109167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.109449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.109514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.109816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.109882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.110082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.110153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.110403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.110467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.110790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.110856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 
00:28:24.049 [2024-12-08 06:32:14.111204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.111270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.111583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.111647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.111912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.111979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.112300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.112365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.112594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.112657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.112907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.112973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.113249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.113315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.113529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.113599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.113810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.113879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.114048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.114114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 
00:28:24.049 [2024-12-08 06:32:14.114367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.114431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.114751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.114817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.115032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.115096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.115349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.115413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.115642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.115706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.115954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.116019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.116248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.116312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.049 [2024-12-08 06:32:14.116584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.049 [2024-12-08 06:32:14.116647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.049 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.116887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.116954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.117226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.117291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 
00:28:24.050 [2024-12-08 06:32:14.117470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.117545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.117774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.117841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.118095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.118160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.118399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.118464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.118675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.118762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.119024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.119090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.119294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.119358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.119594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.119658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.119854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.119921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.120174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.120238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 
00:28:24.050 [2024-12-08 06:32:14.120427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.120495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.120758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.120825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.121072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.121136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.121408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.121473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.121707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.121790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.122088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.122152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.122414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.122478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.122755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.122822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.123048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.123112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.123368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.123433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 
00:28:24.050 [2024-12-08 06:32:14.123750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.123817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.124066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.124131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.124385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.124449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.124682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.124775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.125043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.125115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.125476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.125548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.125859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.125924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.126162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.126227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.126428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.126504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.126687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.126778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 
00:28:24.050 [2024-12-08 06:32:14.127037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.127101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.127416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.127482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.127750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.127816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.128107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.128172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.128381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.128446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.128681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.128779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.129076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.129147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.129486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.129550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.129771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.129838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.130060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.130126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 
00:28:24.050 [2024-12-08 06:32:14.130375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.130458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.130704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.130791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.131052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.131117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.131315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.131378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.131594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.131659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.131892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.131958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.132196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.132261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.132445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.132510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.132714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.132792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.133027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.133091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 
00:28:24.050 [2024-12-08 06:32:14.133292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.133357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.133569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.133642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.133955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.134021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.134396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.134462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.134827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.134905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.135149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.135214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.135415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.135491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.135822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.135887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.136200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.136264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.136550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.136616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 
00:28:24.050 [2024-12-08 06:32:14.136928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.136994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.050 [2024-12-08 06:32:14.137236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.050 [2024-12-08 06:32:14.137300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.050 qpair failed and we were unable to recover it. 00:28:24.051 [2024-12-08 06:32:14.137590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.051 [2024-12-08 06:32:14.137655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.051 qpair failed and we were unable to recover it. 00:28:24.051 [2024-12-08 06:32:14.137854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.051 [2024-12-08 06:32:14.137922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.051 qpair failed and we were unable to recover it. 00:28:24.051 [2024-12-08 06:32:14.138144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.051 [2024-12-08 06:32:14.138214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.051 qpair failed and we were unable to recover it. 00:28:24.324 [2024-12-08 06:32:14.138491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.324 [2024-12-08 06:32:14.138557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.324 qpair failed and we were unable to recover it. 00:28:24.324 [2024-12-08 06:32:14.138818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.324 [2024-12-08 06:32:14.138886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.324 qpair failed and we were unable to recover it. 00:28:24.324 [2024-12-08 06:32:14.139131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.324 [2024-12-08 06:32:14.139196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.324 qpair failed and we were unable to recover it. 00:28:24.324 [2024-12-08 06:32:14.139522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.324 [2024-12-08 06:32:14.139587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.324 qpair failed and we were unable to recover it. 00:28:24.324 [2024-12-08 06:32:14.139959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.324 [2024-12-08 06:32:14.140024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.324 qpair failed and we were unable to recover it. 
00:28:24.324 [2024-12-08 06:32:14.140273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.324 [2024-12-08 06:32:14.140337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.324 qpair failed and we were unable to recover it. 00:28:24.324 [2024-12-08 06:32:14.140537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.324 [2024-12-08 06:32:14.140602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.324 qpair failed and we were unable to recover it. 00:28:24.324 [2024-12-08 06:32:14.140868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.324 [2024-12-08 06:32:14.140936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.141194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.141259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.141554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.141619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.141969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.142043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.142350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.142415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.142767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.142833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.143033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.143098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.143287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.143352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 
00:28:24.325 [2024-12-08 06:32:14.143572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.143646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.143981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.144048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.144360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.144425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.144705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.144787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.144998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.145063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.145269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.145342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.145557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.145621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.145958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.146025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.146276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.146342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.146609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.146673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 
00:28:24.325 [2024-12-08 06:32:14.146992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.147059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.147296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.147360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.147582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.147647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.147928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.147995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.148210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.148284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.148532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.148597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.148848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.148915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.149150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.149215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.149535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.149600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.149944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.150011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 
00:28:24.325 [2024-12-08 06:32:14.150292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.150356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.150562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.150628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.150971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.151038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.151287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.151350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.151663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.151740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.151930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.152001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.152207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.152271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.152509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.152584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.152770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.325 [2024-12-08 06:32:14.152837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.325 qpair failed and we were unable to recover it. 00:28:24.325 [2024-12-08 06:32:14.153156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.326 [2024-12-08 06:32:14.153221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.326 qpair failed and we were unable to recover it. 
00:28:24.331 [2024-12-08 06:32:14.215352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.215417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.215737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.215803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.216123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.216188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.216500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.216563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.216888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.216953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.217277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.217342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.217546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.217610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.217891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.217969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.218203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.218267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.218518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.218582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 
00:28:24.331 [2024-12-08 06:32:14.218805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.218871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.219066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.219130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.219324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.219398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.219628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.219692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.219934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.219998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.220222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.220286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.220474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.220538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.220805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.220870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.221100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.221164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.221418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.221482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 
00:28:24.331 [2024-12-08 06:32:14.221837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.221904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.222186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.222250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.222482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.222546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.222751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.222818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.223032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.223095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.223348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.223412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.223660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.223750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.223951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.224015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.224270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.224335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 00:28:24.331 [2024-12-08 06:32:14.224523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.331 [2024-12-08 06:32:14.224587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.331 qpair failed and we were unable to recover it. 
00:28:24.332 [2024-12-08 06:32:14.224799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.224865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.225062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.225126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.225369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.225435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.225665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.225745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.225992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.226058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.226282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.226348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.226574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.226638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.226885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.226951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.227161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.227227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.227508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.227573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 
00:28:24.332 [2024-12-08 06:32:14.227863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.227929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.228127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.228193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.228385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.228449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.228712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.228793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.229002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.229068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.229282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.229348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.229649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.229714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.229960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.230046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.230323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.230388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.230644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.230710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 
00:28:24.332 [2024-12-08 06:32:14.230950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.231015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.231293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.231359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.231658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.231741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.231975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.232040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.232243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.232309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.232597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.232661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.233040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.233107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.233405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.233471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.233786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.233852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.234140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.234205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 
00:28:24.332 [2024-12-08 06:32:14.234505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.234572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.234846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.234912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.235141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.235205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.235450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.235516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.235859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.235924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.236222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.236288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.236548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.332 [2024-12-08 06:32:14.236615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.332 qpair failed and we were unable to recover it. 00:28:24.332 [2024-12-08 06:32:14.236879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.236944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.237246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.237311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.237564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.237630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 
00:28:24.333 [2024-12-08 06:32:14.237861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.237927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.238160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.238225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.238533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.238599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.238861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.238927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.239194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.239259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.239570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.239636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.239889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.239955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.240313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.240378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.240634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.240700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.240955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.241021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 
00:28:24.333 [2024-12-08 06:32:14.241282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.241346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.241645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.241711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.241940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.242006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.242308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.242372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.242669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.242763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.243014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.243084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.243391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.243456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.243771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.243849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.244113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.244179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.244486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.244550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 
00:28:24.333 [2024-12-08 06:32:14.244836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.244902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.245214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.245280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.245550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.245615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.245893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.245959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.246233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.246298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.246576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.246641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.246923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.246990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.247269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.247334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.247637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.247703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.248031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.248096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 
00:28:24.333 [2024-12-08 06:32:14.248397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.248462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.248779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.248847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.333 qpair failed and we were unable to recover it. 00:28:24.333 [2024-12-08 06:32:14.249149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.333 [2024-12-08 06:32:14.249216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.249516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.249582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.249836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.249902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.250199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.250264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.250519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.250584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.250879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.250947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.251250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.251315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.251625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.251690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 
00:28:24.334 [2024-12-08 06:32:14.251941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.252008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.252316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.252381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.252683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.252770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.253025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.253091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.253395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.253461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.253784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.253852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.254142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.254208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.254398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.254464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.254660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.254740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.254953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.255021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 
00:28:24.334 [2024-12-08 06:32:14.255329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.255395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.255643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.255708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.256011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.256077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.256275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.256339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.256630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.256695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.257006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.257073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.257318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.257382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.257674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.257771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.257983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.258048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.258339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.258404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 
00:28:24.334 [2024-12-08 06:32:14.258664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.258749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.258997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.259062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.259315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.259379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.259674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.259755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.260040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.260106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.260423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.260487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.260792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.260859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.261166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.334 [2024-12-08 06:32:14.261233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.334 qpair failed and we were unable to recover it. 00:28:24.334 [2024-12-08 06:32:14.261530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.335 [2024-12-08 06:32:14.261595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.335 qpair failed and we were unable to recover it. 00:28:24.335 [2024-12-08 06:32:14.261848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.335 [2024-12-08 06:32:14.261914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.335 qpair failed and we were unable to recover it. 
00:28:24.335 [2024-12-08 06:32:14.262219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.335 [2024-12-08 06:32:14.262284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:24.335 qpair failed and we were unable to recover it.
[... the same three-record failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats about 200 more times, identical except for timestamps, from 06:32:14.262551 through 06:32:14.335699 (console time 00:28:24.335-00:28:24.340) ...]
00:28:24.340 [2024-12-08 06:32:14.335965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.340 [2024-12-08 06:32:14.336030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.340 qpair failed and we were unable to recover it. 00:28:24.340 [2024-12-08 06:32:14.336333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.340 [2024-12-08 06:32:14.336399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.340 qpair failed and we were unable to recover it. 00:28:24.340 [2024-12-08 06:32:14.336752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.340 [2024-12-08 06:32:14.336819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.340 qpair failed and we were unable to recover it. 00:28:24.340 [2024-12-08 06:32:14.337069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.340 [2024-12-08 06:32:14.337136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.340 qpair failed and we were unable to recover it. 00:28:24.340 [2024-12-08 06:32:14.337389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.340 [2024-12-08 06:32:14.337470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.340 qpair failed and we were unable to recover it. 00:28:24.340 [2024-12-08 06:32:14.337769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.340 [2024-12-08 06:32:14.337837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.340 qpair failed and we were unable to recover it. 00:28:24.340 [2024-12-08 06:32:14.338139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.340 [2024-12-08 06:32:14.338205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.340 qpair failed and we were unable to recover it. 00:28:24.340 [2024-12-08 06:32:14.338507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.340 [2024-12-08 06:32:14.338572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.340 qpair failed and we were unable to recover it. 00:28:24.340 [2024-12-08 06:32:14.338874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.340 [2024-12-08 06:32:14.338942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.340 qpair failed and we were unable to recover it. 00:28:24.340 [2024-12-08 06:32:14.339188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.340 [2024-12-08 06:32:14.339254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 
00:28:24.341 [2024-12-08 06:32:14.339558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.339623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.339947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.340014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.340306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.340371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.340678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.340759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.341071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.341138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.341388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.341452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.341856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.341923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.342171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.342206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.342364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.342399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.342560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.342595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 
00:28:24.341 [2024-12-08 06:32:14.342766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.342804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.342940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.342976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.343131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.343168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.343345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.343382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.343555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.343619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.343910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.343948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.344121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.344157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.344406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.344478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.344742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.344780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.344919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.344955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 
00:28:24.341 [2024-12-08 06:32:14.345169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.345215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.345468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.345504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.345745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.345817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.345964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.346001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.346214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.346260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.346490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.346556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.346813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.346850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.347032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.347075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.347294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.347336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.347567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.347633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 
00:28:24.341 [2024-12-08 06:32:14.347886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.347923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.348066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.348111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.348349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.348386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.348638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.348702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.348936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.348978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.349124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.349206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.349498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.341 [2024-12-08 06:32:14.349563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.341 qpair failed and we were unable to recover it. 00:28:24.341 [2024-12-08 06:32:14.349847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.349884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.350060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.350097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.350279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.350315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 
00:28:24.342 [2024-12-08 06:32:14.350544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.350620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.350857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.350894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.351081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.351118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.351244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.351278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.351511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.351548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.351819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.351856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.352041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.352086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.352241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.352303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.352625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.352690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.353004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.353049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 
00:28:24.342 [2024-12-08 06:32:14.353232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.353278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.353540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.353606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.353867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.353904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.354118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.354155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.354359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.354425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.354687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.354792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.354979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.355016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.355250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.355287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.355517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.355590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.355882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.355919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 
00:28:24.342 [2024-12-08 06:32:14.356167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.356233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.356540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.356605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.356850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.356887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.357144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.357181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.357466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.357530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.357824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.357861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.358060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.358097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.358332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.358368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.358508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.358548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.358759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.358796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 
00:28:24.342 [2024-12-08 06:32:14.358975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.359011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.359204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.359240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.359420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.359456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.359679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.359764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.342 qpair failed and we were unable to recover it. 00:28:24.342 [2024-12-08 06:32:14.359987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.342 [2024-12-08 06:32:14.360028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.360178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.360212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.360351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.360385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.360645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.360710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.360943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.360980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.361169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.361244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 
00:28:24.343 [2024-12-08 06:32:14.361539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.361604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.361926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.361993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.362213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.362277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.362546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.362610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.362901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.362969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.363220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.363285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.363533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.363598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.363911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.363978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.364288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.364354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.364596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.364661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 
00:28:24.343 [2024-12-08 06:32:14.364955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.365022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.365344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.365409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.365666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.365747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.366064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.366129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.366399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.366465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.366755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.366822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.367089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.367154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.367361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.367427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.367684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.367769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.368090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.368154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 
00:28:24.343 [2024-12-08 06:32:14.368473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.368538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.368870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.368938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.369240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.369305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.369576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.369641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.369965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.370032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.370307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.370373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.370684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.370765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.371080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.343 [2024-12-08 06:32:14.371145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.343 qpair failed and we were unable to recover it. 00:28:24.343 [2024-12-08 06:32:14.371408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.344 [2024-12-08 06:32:14.371473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.344 qpair failed and we were unable to recover it. 00:28:24.344 [2024-12-08 06:32:14.371754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.344 [2024-12-08 06:32:14.371820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.344 qpair failed and we were unable to recover it. 
00:28:24.344 [2024-12-08 06:32:14.372116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.344 [2024-12-08 06:32:14.372181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.344 qpair failed and we were unable to recover it. 00:28:24.344 [2024-12-08 06:32:14.372434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.344 [2024-12-08 06:32:14.372499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.344 qpair failed and we were unable to recover it. 00:28:24.344 [2024-12-08 06:32:14.372758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.344 [2024-12-08 06:32:14.372825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.344 qpair failed and we were unable to recover it. 00:28:24.344 [2024-12-08 06:32:14.373118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.344 [2024-12-08 06:32:14.373182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.344 qpair failed and we were unable to recover it. 00:28:24.344 [2024-12-08 06:32:14.373427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.344 [2024-12-08 06:32:14.373503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.344 qpair failed and we were unable to recover it. 00:28:24.344 [2024-12-08 06:32:14.373778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.344 [2024-12-08 06:32:14.373846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.344 qpair failed and we were unable to recover it. 00:28:24.344 [2024-12-08 06:32:14.374038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.344 [2024-12-08 06:32:14.374103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.344 qpair failed and we were unable to recover it. 00:28:24.344 [2024-12-08 06:32:14.374367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.344 [2024-12-08 06:32:14.374432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.344 qpair failed and we were unable to recover it. 00:28:24.344 [2024-12-08 06:32:14.374751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.344 [2024-12-08 06:32:14.374818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.344 qpair failed and we were unable to recover it. 00:28:24.344 [2024-12-08 06:32:14.375132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.344 [2024-12-08 06:32:14.375198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.344 qpair failed and we were unable to recover it. 
00:28:24.344 [2024-12-08 06:32:14.375463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.344 [2024-12-08 06:32:14.375528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:24.344 qpair failed and we were unable to recover it.
[the same connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it sequence for tqpair=0x7f7540000b90 repeats with advancing timestamps from 06:32:14.375779 through 06:32:14.434909; duplicate entries elided]
00:28:24.627 [2024-12-08 06:32:14.435067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.627 [2024-12-08 06:32:14.435127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.627 qpair failed and we were unable to recover it.
[the same sequence for tqpair=0xca45d0 repeats with advancing timestamps through 06:32:14.437574; duplicate entries elided]
00:28:24.628 [2024-12-08 06:32:14.437740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.437786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.437907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.437934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.438077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.438104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.438252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.438279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.438412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.438438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.438572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.438598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.438733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.438768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.438854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.438882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.439032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.439072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.439201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.439244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 
00:28:24.628 [2024-12-08 06:32:14.439395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.439421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.439581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.439607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.439767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.439794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.439900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.439927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.440060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.440086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.440248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.440274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.440383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.440412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.440581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.440626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.440798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.440824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 00:28:24.628 [2024-12-08 06:32:14.440953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.440980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.628 qpair failed and we were unable to recover it. 
00:28:24.628 [2024-12-08 06:32:14.441109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.628 [2024-12-08 06:32:14.441155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.441284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.441335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.441507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.441549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.441708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.441771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.441869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.441895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.442029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.442061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.442239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.442281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.442439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.442480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.442670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.442710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.442852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.442879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 
00:28:24.629 [2024-12-08 06:32:14.442977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.443011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.443127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.443168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.443326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.443367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.443555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.443596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.443798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.443825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.443926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.443952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.444079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.444120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.444341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.444382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.444592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.444633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.444831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.444857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 
00:28:24.629 [2024-12-08 06:32:14.444985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.445028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.445223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.445250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.445458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.445517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.445677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.445717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.445890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.445917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.446037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.446065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.446262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.446303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.446486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.446529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.446699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.446761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.629 qpair failed and we were unable to recover it. 00:28:24.629 [2024-12-08 06:32:14.446880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.629 [2024-12-08 06:32:14.446909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 
00:28:24.630 [2024-12-08 06:32:14.447027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.447067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.447265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.447307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.447478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.447525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.447760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.447786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.447912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.447937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.448107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.448171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.448365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.448406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.448582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.448622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.448787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.448814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.448931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.448957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 
00:28:24.630 [2024-12-08 06:32:14.449165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.449223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.449399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.449439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.449632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.449672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.449876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.449903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.450097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.450154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.450343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.450404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.450580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.450621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.450796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.450823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.450915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.450941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.451059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.451083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 
00:28:24.630 [2024-12-08 06:32:14.451266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.451306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.451480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.451521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.451695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.451758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.451869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.451894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.452054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.452097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.452315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.452381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.452608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.452649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.452843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.452869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.452956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.452984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.453125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.453193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 
00:28:24.630 [2024-12-08 06:32:14.453368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.453409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.453605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.453630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.453822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.453882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.630 [2024-12-08 06:32:14.454124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.630 [2024-12-08 06:32:14.454183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.630 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.454321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.454384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.454525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.454587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.454751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.454793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.454917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.454958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.455113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.455153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.455308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.455349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 
00:28:24.631 [2024-12-08 06:32:14.455574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.455626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.455799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.455840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.455992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.456055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.456203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.456272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.456471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.456511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.456671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.456711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.456856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.456897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.457090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.457131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.457275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.457316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.457503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.457544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 
00:28:24.631 [2024-12-08 06:32:14.457701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.457753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.457881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.457921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.458078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.458120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.458322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.458362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.458550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.458597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.458746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.458788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.458937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.459012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.459292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.459352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.459573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.459614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.459813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.459874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 
00:28:24.631 [2024-12-08 06:32:14.460072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.460132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.460294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.460354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.631 [2024-12-08 06:32:14.460507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.631 [2024-12-08 06:32:14.460547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.631 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.460789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.460831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.460962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.461003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.461212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.461271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.461433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.461474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.461604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.461644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.461775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.461817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.461960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.462027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 
00:28:24.632 [2024-12-08 06:32:14.462245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.462310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.462498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.462538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.462700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.462751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.462905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.462965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.463124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.463165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.463358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.463398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.463597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.463637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.463836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.463897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.464050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.464113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 00:28:24.632 [2024-12-08 06:32:14.464282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.464340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it. 
00:28:24.632 [2024-12-08 06:32:14.464529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.632 [2024-12-08 06:32:14.464570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.632 qpair failed and we were unable to recover it.
[~200 further back-to-back repetitions of the same three-line error elided: timestamps advance from 06:32:14.464799 through 06:32:14.512161, and every connect() attempt for tqpair 0xca45d0 to 10.0.0.2 port 4420 fails with errno = 111, after which the qpair cannot be recovered]
00:28:24.639 [2024-12-08 06:32:14.512393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.639 [2024-12-08 06:32:14.512451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.639 qpair failed and we were unable to recover it.
00:28:24.639 [2024-12-08 06:32:14.512583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.639 [2024-12-08 06:32:14.512623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.639 qpair failed and we were unable to recover it. 00:28:24.639 [2024-12-08 06:32:14.512784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.639 [2024-12-08 06:32:14.512826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.639 qpair failed and we were unable to recover it. 00:28:24.639 [2024-12-08 06:32:14.512992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.639 [2024-12-08 06:32:14.513033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.639 qpair failed and we were unable to recover it. 00:28:24.639 [2024-12-08 06:32:14.513234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.639 [2024-12-08 06:32:14.513274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.639 qpair failed and we were unable to recover it. 00:28:24.639 [2024-12-08 06:32:14.513472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.639 [2024-12-08 06:32:14.513512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.639 qpair failed and we were unable to recover it. 00:28:24.639 [2024-12-08 06:32:14.513746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.639 [2024-12-08 06:32:14.513787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.639 qpair failed and we were unable to recover it. 00:28:24.639 [2024-12-08 06:32:14.513943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.639 [2024-12-08 06:32:14.514007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.639 qpair failed and we were unable to recover it. 00:28:24.639 [2024-12-08 06:32:14.514166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.639 [2024-12-08 06:32:14.514223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.639 qpair failed and we were unable to recover it. 00:28:24.639 [2024-12-08 06:32:14.514420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.639 [2024-12-08 06:32:14.514480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.639 qpair failed and we were unable to recover it. 00:28:24.639 [2024-12-08 06:32:14.514637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.639 [2024-12-08 06:32:14.514677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.639 qpair failed and we were unable to recover it. 
00:28:24.639 [2024-12-08 06:32:14.514838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.639 [2024-12-08 06:32:14.514902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.639 qpair failed and we were unable to recover it. 00:28:24.639 [2024-12-08 06:32:14.515101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.639 [2024-12-08 06:32:14.515160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.639 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.515358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.515417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.515599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.515639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.515820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.515881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.516027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.516068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.516230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.516270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.516456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.516496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.516636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.516677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.516850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.516913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 
00:28:24.640 [2024-12-08 06:32:14.517100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.517140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.517323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.517363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.517550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.517591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.517809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.517876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.518072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.518141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.518331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.518377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.518573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.518613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.518820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.518882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.519053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.519094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.519247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.519287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 
00:28:24.640 [2024-12-08 06:32:14.519475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.519515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.519717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.519766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.519902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.519942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.520099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.520140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.520326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.520366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.520520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.520560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.520761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.520803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.520968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.521039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.521212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.521271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.640 [2024-12-08 06:32:14.521446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.521487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 
00:28:24.640 [2024-12-08 06:32:14.521650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.640 [2024-12-08 06:32:14.521690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.640 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.521879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.521938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.522156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.522216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.522416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.522457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.522662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.522702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.522884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.522926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.523097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.523139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.523345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.523405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.523587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.523627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.523816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.523877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 
00:28:24.641 [2024-12-08 06:32:14.524096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.524137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.524328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.524389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.524590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.524631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.524812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.524874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.525068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.525129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.525307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.525367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.525554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.525594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.525797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.525859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.526032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.526089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.526252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.526292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 
00:28:24.641 [2024-12-08 06:32:14.526479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.526520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.526683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.526730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.526882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.526922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.527110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.527150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.527308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.527349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.527547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.527588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.527749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.527796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.528023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.528063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.528247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.528306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.528493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.528534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 
00:28:24.641 [2024-12-08 06:32:14.528659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.528700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.528882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.528945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.529159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.529220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.529412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.529453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.529588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.529628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.529805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.529870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.641 qpair failed and we were unable to recover it. 00:28:24.641 [2024-12-08 06:32:14.530022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.641 [2024-12-08 06:32:14.530082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.530201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.530241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.530407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.530455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.530617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.530656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 
00:28:24.642 [2024-12-08 06:32:14.530844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.530885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.531079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.531120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.531311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.531351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.531551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.531591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.531793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.531835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.531974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.532015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.532144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.532184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.532340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.532380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.532553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.532593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.532758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.532810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 
00:28:24.642 [2024-12-08 06:32:14.532953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.533005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.533193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.533232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.533422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.533462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.533599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.533655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.533844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.533905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.534094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.534134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.534316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.534357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.534555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.534595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.534742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.534787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.534985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.535044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 
00:28:24.642 [2024-12-08 06:32:14.535253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.535311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.535500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.535540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.535712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.535769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.535914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.535978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.536121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.536184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.536371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.642 [2024-12-08 06:32:14.536411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.642 qpair failed and we were unable to recover it. 00:28:24.642 [2024-12-08 06:32:14.536581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.536621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.536815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.536877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.537053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.537093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.537267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.537308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 
00:28:24.643 [2024-12-08 06:32:14.537455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.537496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.537653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.537693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.537878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.537937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.538122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.538182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.538389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.538429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.538616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.538657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.538818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.538859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.539021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.539061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.539268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.539309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.539483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.539524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 
00:28:24.643 [2024-12-08 06:32:14.539661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.539701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.539876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.539917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.540077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.540118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.540314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.540354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.540541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.540582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.540768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.540810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.540979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.541020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.541210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.541251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.541458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.541499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 00:28:24.643 [2024-12-08 06:32:14.541669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.541709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.643 qpair failed and we were unable to recover it. 
00:28:24.643 [2024-12-08 06:32:14.541864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.643 [2024-12-08 06:32:14.541905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.644 qpair failed and we were unable to recover it. 00:28:24.644 [2024-12-08 06:32:14.542068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.644 [2024-12-08 06:32:14.542128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.644 qpair failed and we were unable to recover it. 00:28:24.644 [2024-12-08 06:32:14.542330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.644 [2024-12-08 06:32:14.542370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.644 qpair failed and we were unable to recover it. 00:28:24.644 [2024-12-08 06:32:14.542535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.644 [2024-12-08 06:32:14.542575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.644 qpair failed and we were unable to recover it. 00:28:24.644 [2024-12-08 06:32:14.542791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.644 [2024-12-08 06:32:14.542862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.644 qpair failed and we were unable to recover it. 00:28:24.644 [2024-12-08 06:32:14.542989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.644 [2024-12-08 06:32:14.543030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.644 qpair failed and we were unable to recover it. 00:28:24.644 [2024-12-08 06:32:14.543212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.644 [2024-12-08 06:32:14.543253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.644 qpair failed and we were unable to recover it. 00:28:24.644 [2024-12-08 06:32:14.543449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.644 [2024-12-08 06:32:14.543490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.644 qpair failed and we were unable to recover it. 00:28:24.644 [2024-12-08 06:32:14.543668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.644 [2024-12-08 06:32:14.543709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.644 qpair failed and we were unable to recover it. 00:28:24.644 [2024-12-08 06:32:14.543863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.644 [2024-12-08 06:32:14.543925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.644 qpair failed and we were unable to recover it. 
00:28:24.650 [2024-12-08 06:32:14.589971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.650 [2024-12-08 06:32:14.590031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.650 qpair failed and we were unable to recover it. 00:28:24.650 [2024-12-08 06:32:14.590190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.650 [2024-12-08 06:32:14.590249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.650 qpair failed and we were unable to recover it. 00:28:24.650 [2024-12-08 06:32:14.590452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.650 [2024-12-08 06:32:14.590493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.650 qpair failed and we were unable to recover it. 00:28:24.650 [2024-12-08 06:32:14.590674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.650 [2024-12-08 06:32:14.590716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.650 qpair failed and we were unable to recover it. 00:28:24.650 [2024-12-08 06:32:14.590915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.650 [2024-12-08 06:32:14.590974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.650 qpair failed and we were unable to recover it. 00:28:24.650 [2024-12-08 06:32:14.591122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.650 [2024-12-08 06:32:14.591189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.650 qpair failed and we were unable to recover it. 00:28:24.650 [2024-12-08 06:32:14.591353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.650 [2024-12-08 06:32:14.591394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.650 qpair failed and we were unable to recover it. 00:28:24.650 [2024-12-08 06:32:14.591541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.650 [2024-12-08 06:32:14.591582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.650 qpair failed and we were unable to recover it. 00:28:24.650 [2024-12-08 06:32:14.591761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.650 [2024-12-08 06:32:14.591813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.650 qpair failed and we were unable to recover it. 00:28:24.650 [2024-12-08 06:32:14.591948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.650 [2024-12-08 06:32:14.591989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.650 qpair failed and we were unable to recover it. 
00:28:24.650 [2024-12-08 06:32:14.592127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.650 [2024-12-08 06:32:14.592168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.650 qpair failed and we were unable to recover it. 00:28:24.650 [2024-12-08 06:32:14.592350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.650 [2024-12-08 06:32:14.592390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.650 qpair failed and we were unable to recover it. 00:28:24.650 [2024-12-08 06:32:14.592576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.650 [2024-12-08 06:32:14.592616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.650 qpair failed and we were unable to recover it. 00:28:24.650 [2024-12-08 06:32:14.592790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.592832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.592985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.593027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.593214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.593255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.593434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.593475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.593653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.593694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.593857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.593920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.594108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.594167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 
00:28:24.651 [2024-12-08 06:32:14.594325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.594386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.594538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.594578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.594746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.594791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.594949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.595010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.595137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.595177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.595332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.595373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.595527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.595568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.595688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.595741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.595897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.595940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.596087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.596129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 
00:28:24.651 [2024-12-08 06:32:14.596313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.596354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.596513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.596554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.596686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.596750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.596942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.596983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.597135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.597176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.597300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.597341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.597499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.597542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.597699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.597752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.597892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.597934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 00:28:24.651 [2024-12-08 06:32:14.598117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.651 [2024-12-08 06:32:14.598157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.651 qpair failed and we were unable to recover it. 
00:28:24.652 [2024-12-08 06:32:14.598311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.598351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.598536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.598578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.598745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.598786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.598978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.599034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.599226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.599267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.599417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.599458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.599629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.599665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.599909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.599974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.600186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.600250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.600432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.600479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 
00:28:24.652 [2024-12-08 06:32:14.600689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.600739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.600907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.600969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.601152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.601193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.601344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.601384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.601516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.601557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.601744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.601790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.601955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.602015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.602195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.602253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.602418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.602460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.602639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.602680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 
00:28:24.652 [2024-12-08 06:32:14.602902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.602972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.603174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.603243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.603427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.603468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.603660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.603701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.603833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.603875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.604065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.604123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.604322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.604381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.604561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.652 [2024-12-08 06:32:14.604602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.652 qpair failed and we were unable to recover it. 00:28:24.652 [2024-12-08 06:32:14.604795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.604859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.605028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.605087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 
00:28:24.653 [2024-12-08 06:32:14.605272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.605317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.605503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.605548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.605698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.605751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.605912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.605954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.606114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.606158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.606283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.606325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.606483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.606530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.606673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.606713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.606908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.606949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.607080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.607121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 
00:28:24.653 [2024-12-08 06:32:14.607274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.607315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.607495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.607536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.607669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.607710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.607870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.607915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.608084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.608129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.608289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.608330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.608488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.608529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.608709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.608759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.608916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.608957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.609112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.609153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 
00:28:24.653 [2024-12-08 06:32:14.609327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.609368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.609530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.609570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.609729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.609771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.609956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.609998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.610195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.653 [2024-12-08 06:32:14.610236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.653 qpair failed and we were unable to recover it. 00:28:24.653 [2024-12-08 06:32:14.610417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.610458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.610649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.610694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.610902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.610943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.611145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.611203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.611359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.611418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 
00:28:24.654 [2024-12-08 06:32:14.611570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.611610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.611783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.611856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.612058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.612115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.612285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.612343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.612506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.612550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.612705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.612771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.612942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.612984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.613111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.613151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.613332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.613374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.613559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.613602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 
00:28:24.654 [2024-12-08 06:32:14.613738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.613780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.613908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.613949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.614090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.614130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.614280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.614324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.614513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.614555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.614673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.614713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.614921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.614980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.615171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.615248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.615447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.615487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.615641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.615679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 
00:28:24.654 [2024-12-08 06:32:14.615871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.615911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.616093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.616154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.616355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.616399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.654 qpair failed and we were unable to recover it. 00:28:24.654 [2024-12-08 06:32:14.616579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.654 [2024-12-08 06:32:14.616618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.616754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.616794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.616977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.617027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.617214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.617274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.617426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.617465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.617641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.617680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.617873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.617930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 
00:28:24.655 [2024-12-08 06:32:14.618106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.618164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.618317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.618356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.618479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.618517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.618663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.618702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.618862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.618918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.619117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.619160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.619344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.619385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.619556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.619604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.619795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.619837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 00:28:24.655 [2024-12-08 06:32:14.620023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.655 [2024-12-08 06:32:14.620064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:24.655 qpair failed and we were unable to recover it. 
00:28:24.655 [2024-12-08 06:32:14.620261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.655 [2024-12-08 06:32:14.620302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:24.655 qpair failed and we were unable to recover it.
00:28:24.655 [2024-12-08 06:32:14.620474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.655 [2024-12-08 06:32:14.620523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:24.655 qpair failed and we were unable to recover it.
00:28:24.655 [2024-12-08 06:32:14.620688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.655 [2024-12-08 06:32:14.620754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:24.655 qpair failed and we were unable to recover it.
[The same three-line failure sequence repeats for tqpair=0xca45d0 (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") with timestamps advancing from 06:32:14.620889 through 06:32:14.670232; duplicate entries elided.]
00:28:24.662 [2024-12-08 06:32:14.670380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.662 [2024-12-08 06:32:14.670422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.662 qpair failed and we were unable to recover it. 00:28:24.662 [2024-12-08 06:32:14.670575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.662 [2024-12-08 06:32:14.670616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.662 qpair failed and we were unable to recover it. 00:28:24.662 [2024-12-08 06:32:14.670742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.662 [2024-12-08 06:32:14.670783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.662 qpair failed and we were unable to recover it. 00:28:24.662 [2024-12-08 06:32:14.670939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.662 [2024-12-08 06:32:14.670980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.662 qpair failed and we were unable to recover it. 00:28:24.662 [2024-12-08 06:32:14.671126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.662 [2024-12-08 06:32:14.671167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.662 qpair failed and we were unable to recover it. 00:28:24.662 [2024-12-08 06:32:14.671313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.662 [2024-12-08 06:32:14.671354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.662 qpair failed and we were unable to recover it. 00:28:24.662 [2024-12-08 06:32:14.671504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.662 [2024-12-08 06:32:14.671544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.662 qpair failed and we were unable to recover it. 00:28:24.662 [2024-12-08 06:32:14.671690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.662 [2024-12-08 06:32:14.671741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.662 qpair failed and we were unable to recover it. 00:28:24.662 [2024-12-08 06:32:14.671910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.662 [2024-12-08 06:32:14.671951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.662 qpair failed and we were unable to recover it. 00:28:24.662 [2024-12-08 06:32:14.672128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.662 [2024-12-08 06:32:14.672169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.662 qpair failed and we were unable to recover it. 
00:28:24.662 [2024-12-08 06:32:14.672318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.662 [2024-12-08 06:32:14.672358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.662 qpair failed and we were unable to recover it. 00:28:24.662 [2024-12-08 06:32:14.672470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.662 [2024-12-08 06:32:14.672511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.662 qpair failed and we were unable to recover it. 00:28:24.662 [2024-12-08 06:32:14.672646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.662 [2024-12-08 06:32:14.672687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.662 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.672895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.672936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.673093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.673133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.673286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.673327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.673474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.673514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.673669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.673709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.673883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.673925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.674078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.674119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 
00:28:24.663 [2024-12-08 06:32:14.674261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.674301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.674421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.674462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.674614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.674655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.674843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.674884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.675038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.675078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.675228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.675269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.675457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.675497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.675676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.675717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.675882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.675923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.676053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.676093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 
00:28:24.663 [2024-12-08 06:32:14.676215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.676255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.676407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.676448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.676594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.676634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.676786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.676828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.676980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.677021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.677171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.677212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.677392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.677433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.677559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.677600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.677732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.677773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.677954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.678001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 
00:28:24.663 [2024-12-08 06:32:14.678193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.678233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.678381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.678422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.678539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.678579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.678780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.678850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.679002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.679043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.663 qpair failed and we were unable to recover it. 00:28:24.663 [2024-12-08 06:32:14.679199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.663 [2024-12-08 06:32:14.679240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.679391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.679431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.679575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.679615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.679775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.679816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.679997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.680037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 
00:28:24.664 [2024-12-08 06:32:14.680154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.680195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.680372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.680412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.680593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.680633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.680765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.680805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.680951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.681015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.681171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.681229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.681378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.681419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.681597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.681638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.681833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.681894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.682068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.682132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 
00:28:24.664 [2024-12-08 06:32:14.682309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.682349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.682463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.682503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.682656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.682697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.682863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.682926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.683119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.683178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.683360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.683400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.683526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.683573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.683735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.683777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.683940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.683998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.684176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.684217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 
00:28:24.664 [2024-12-08 06:32:14.684369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.684410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.684558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.684597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.684780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.684821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.685008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.685049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.685229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.685270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.664 qpair failed and we were unable to recover it. 00:28:24.664 [2024-12-08 06:32:14.685428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.664 [2024-12-08 06:32:14.685469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.685650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.685691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.685832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.685873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.686074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.686139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.686309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.686368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 
00:28:24.665 [2024-12-08 06:32:14.686556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.686597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.686756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.686798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.686948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.687012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.687184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.687243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.687423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.687464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.687585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.687625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.687803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.687845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.687997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.688037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.688198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.688259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.688440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.688480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 
00:28:24.665 [2024-12-08 06:32:14.688657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.688698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.688860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.688923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.689102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.689143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.689337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.689395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.689524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.689564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.689745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.689787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.689957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.690018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.690181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.690243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.690428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.690469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.690590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.690630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 
00:28:24.665 [2024-12-08 06:32:14.690799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.690864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.691018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.691079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.691266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.691306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.691461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.691502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.691686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.691734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.691931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.691995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.665 [2024-12-08 06:32:14.692166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.665 [2024-12-08 06:32:14.692225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.665 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.692401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.692449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.692563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.692603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.692803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.692872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 
00:28:24.666 [2024-12-08 06:32:14.693042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.693102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.693257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.693298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.693453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.693494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.693669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.693709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.693839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.693881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.694071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.694111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.694299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.694339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.694504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.694545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.694753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.694795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.694987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.695028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 
00:28:24.666 [2024-12-08 06:32:14.695240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.695299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.695495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.695536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.695693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.695741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.695915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.695983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.696192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.696259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.696507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.696565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.696825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.696886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.697122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.697183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.697421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.697461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.697611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.697661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 
00:28:24.666 [2024-12-08 06:32:14.697871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.697931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.698122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.698183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.698387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.698446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.698636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.698677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.698864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.698930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.699196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.666 [2024-12-08 06:32:14.699255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.666 qpair failed and we were unable to recover it. 00:28:24.666 [2024-12-08 06:32:14.699462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.699524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.699802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.699865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.700073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.700140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.700351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.700413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 
00:28:24.667 [2024-12-08 06:32:14.700602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.700643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.700844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.700905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.701093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.701153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.701373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.701433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.701612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.701652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.701828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.701890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.702095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.702155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.702421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.702461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.702682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.702732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.702918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.702989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 
00:28:24.667 [2024-12-08 06:32:14.703219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.703260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.703504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.703565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.703812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.703874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.704126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.704167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.704370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.704411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.704602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.704643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.704812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.704854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.705033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.705096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.705257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.705298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.705488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.705528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 
00:28:24.667 [2024-12-08 06:32:14.705732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.705780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.705925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.705988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.706164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.706204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.706423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.706463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.706629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.706670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.706844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.667 [2024-12-08 06:32:14.706885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.667 qpair failed and we were unable to recover it. 00:28:24.667 [2024-12-08 06:32:14.707061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.707101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.707257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.707297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.707472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.707512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.707633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.707673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 
00:28:24.668 [2024-12-08 06:32:14.707834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.707876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.707989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.708030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.708246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.708286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.708499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.708539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.708697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.708761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.708954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.709005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.709176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.709234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.709361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.709402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.709563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.709612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.709852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.709893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 
00:28:24.668 [2024-12-08 06:32:14.710015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.710060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.710246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.710287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.710478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.710519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.710676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.710717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.710894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.710953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.711209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.711270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.711521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.711562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.711745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.711787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.711955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.712023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.712271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.712332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 
00:28:24.668 [2024-12-08 06:32:14.712532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.712572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.712700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.712775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.712910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.712988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.713247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.713307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.713455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.713495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.713648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.713691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.713913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.668 [2024-12-08 06:32:14.713984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.668 qpair failed and we were unable to recover it. 00:28:24.668 [2024-12-08 06:32:14.714198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.714259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.714457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.714496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.714651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.714692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 
00:28:24.669 [2024-12-08 06:32:14.714889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.714949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.715111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.715151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.715322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.715362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.715579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.715620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.715834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.715897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.716111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.716152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.716343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.716384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.716620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.716660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.716840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.716903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.717079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.717141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 
00:28:24.669 [2024-12-08 06:32:14.717343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.717401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.717583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.717624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.717787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.717857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.718063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.718132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.718294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.718353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.718555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.718596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.718801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.718863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.719036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.719098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.719264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.719304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.719493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.719534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 
00:28:24.669 [2024-12-08 06:32:14.719735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.719786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.719936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.719986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.720159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.720200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.720438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.720479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.720711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.720769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.720968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.721009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.721209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.721270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.721514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.721573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.721800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.721860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.722062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.722124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 
00:28:24.669 [2024-12-08 06:32:14.722303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.722361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.669 qpair failed and we were unable to recover it. 00:28:24.669 [2024-12-08 06:32:14.722578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.669 [2024-12-08 06:32:14.722619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.670 qpair failed and we were unable to recover it. 00:28:24.670 [2024-12-08 06:32:14.722838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.670 [2024-12-08 06:32:14.722898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.670 qpair failed and we were unable to recover it. 00:28:24.670 [2024-12-08 06:32:14.723105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.670 [2024-12-08 06:32:14.723164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.670 qpair failed and we were unable to recover it. 00:28:24.670 [2024-12-08 06:32:14.723368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.670 [2024-12-08 06:32:14.723425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.670 qpair failed and we were unable to recover it. 00:28:24.670 [2024-12-08 06:32:14.723663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.670 [2024-12-08 06:32:14.723704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.670 qpair failed and we were unable to recover it. 00:28:24.670 [2024-12-08 06:32:14.723918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.670 [2024-12-08 06:32:14.723979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.670 qpair failed and we were unable to recover it. 00:28:24.670 [2024-12-08 06:32:14.724146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.670 [2024-12-08 06:32:14.724209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.670 qpair failed and we were unable to recover it. 00:28:24.670 [2024-12-08 06:32:14.724415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.670 [2024-12-08 06:32:14.724476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.670 qpair failed and we were unable to recover it. 00:28:24.670 [2024-12-08 06:32:14.724648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.670 [2024-12-08 06:32:14.724689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.670 qpair failed and we were unable to recover it. 
00:28:24.670 [2024-12-08 06:32:14.724899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.670 [2024-12-08 06:32:14.724961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.670 qpair failed and we were unable to recover it. 00:28:24.670 [2024-12-08 06:32:14.725158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.670 [2024-12-08 06:32:14.725217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.670 qpair failed and we were unable to recover it. 00:28:24.670 [2024-12-08 06:32:14.725444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.670 [2024-12-08 06:32:14.725495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.670 qpair failed and we were unable to recover it. 00:28:24.670 [2024-12-08 06:32:14.725660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.670 [2024-12-08 06:32:14.725706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.670 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.725914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.725986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.726111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.726153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.726343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.726383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.726543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.726584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.726761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.726803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.726967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.727014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 
00:28:24.947 [2024-12-08 06:32:14.727182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.727222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.727386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.727427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.727619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.727659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.727839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.727880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.728099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.728139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.728305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.728346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.728507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.728548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.728709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.728778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.728909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.728950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.729136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.729182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 
00:28:24.947 [2024-12-08 06:32:14.729417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.729458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.729620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.729660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.729823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.729864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.730017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.730057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.730238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.730279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.730465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.730505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.730717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.730778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.730940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.731007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.731247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.731286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.731448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.731500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 
00:28:24.947 [2024-12-08 06:32:14.731740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.731785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.731998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.732038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.732207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.732267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.732468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.732529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.732739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.732780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.732910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.732951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.733119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.733180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.733427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.733485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.733768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.733819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.733961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.734034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 
00:28:24.947 [2024-12-08 06:32:14.734208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.947 [2024-12-08 06:32:14.734267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.947 qpair failed and we were unable to recover it. 00:28:24.947 [2024-12-08 06:32:14.734437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.734499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.734688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.734738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.734928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.734993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.735192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.735251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.735466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.735525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.735709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.735758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.735925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.735966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.736128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.736186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.736352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.736392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 
00:28:24.948 [2024-12-08 06:32:14.736546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.736586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.736712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.736773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.736931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.736983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.737145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.737192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.737358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.737399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.737617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.737657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.737809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.737850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.737982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.738034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.738240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.738281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.738475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.738516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 
00:28:24.948 [2024-12-08 06:32:14.738680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.738731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.738914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.738973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.739170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.739228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.739412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.739471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.739595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.739648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.739883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.739942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.740094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.740160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.740356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.740419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.740613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.740654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 00:28:24.948 [2024-12-08 06:32:14.740863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.948 [2024-12-08 06:32:14.740924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.948 qpair failed and we were unable to recover it. 
00:28:24.952 [2024-12-08 06:32:14.793290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.793332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.793545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.793585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.793798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.793860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.794064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.794106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.794299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.794358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.794614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.794656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.794923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.794985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.795229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.795289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.795450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.795511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.795702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.795755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 
00:28:24.952 [2024-12-08 06:32:14.795960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.796001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.796262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.796323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.796499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.796556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.796804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.796872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.797109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.797170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.797421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.797487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.797750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.797792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.798001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.798059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.798197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.798259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.798444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.798504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 
00:28:24.952 [2024-12-08 06:32:14.798744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.798786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.799046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.799107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.799379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.799439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.799646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.799687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.799845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.799908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.800111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.800170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.800354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.800415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.800620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.800661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.800936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.800998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.801165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.801226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 
00:28:24.952 [2024-12-08 06:32:14.801449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.801506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.801758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.801801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.802013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.802074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.802254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.802314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.802574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.802635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.802830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.802891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.803106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.803166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.803389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.803448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.803698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.803748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 00:28:24.952 [2024-12-08 06:32:14.804026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.952 [2024-12-08 06:32:14.804089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.952 qpair failed and we were unable to recover it. 
00:28:24.952 [2024-12-08 06:32:14.804301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.804361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.804600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.804640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.804848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.804891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.805157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.805219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.805540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.805607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.805882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.805944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.806173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.806233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.806462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.806521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.806771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.806813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.806998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.807058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 
00:28:24.953 [2024-12-08 06:32:14.807312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.807372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.807635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.807676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.807822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.807864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.808086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.808145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.808406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.808466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.808684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.808736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.808972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.809039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.809290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.809350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.809591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.809649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.809846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.809888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 
00:28:24.953 [2024-12-08 06:32:14.810098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.810158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.810355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.810414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.810645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.810685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.810906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.810969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.811201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.811261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.811477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.811536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.811786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.811829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.812022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.812088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.812357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.812416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.812630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.812671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 
00:28:24.953 [2024-12-08 06:32:14.812981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.813044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.813303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.813362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.813629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.813689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.813913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.813955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.814210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.814269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.814494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.814554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.814784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.814854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.815070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.815130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.815386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.815447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.815610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.815651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 
00:28:24.953 [2024-12-08 06:32:14.815855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.815916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.816091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.816151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.816381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.816441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.816610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.816651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.816940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.817000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.817225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.817284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.817476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.817516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.817670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.817711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.953 [2024-12-08 06:32:14.817921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.953 [2024-12-08 06:32:14.817981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.953 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.818198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.818258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 
00:28:24.954 [2024-12-08 06:32:14.818423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.818483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.818695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.818748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.818971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.819041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.819281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.819342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.819550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.819591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.819806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.819869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.820096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.820157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.820371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.820413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.820622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.820664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.820961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.821022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 
00:28:24.954 [2024-12-08 06:32:14.821294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.821352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.821530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.821572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.821813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.821876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.822094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.822155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.822389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.822449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.822644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.822686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.822968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.823035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.823272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.823335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.823597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.823641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.823854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.823916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 
00:28:24.954 [2024-12-08 06:32:14.824142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.824184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.824431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.824494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.824740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.824782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.824964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.825026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.825293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.825352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.825583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.825647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.825917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.825959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.826183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.826245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.826462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.826521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.826751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.826793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 
00:28:24.954 [2024-12-08 06:32:14.827010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.827070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.827333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.827393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.827656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.827698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.827928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.827970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.828237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.828305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.828553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.828613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.828882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.828925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.829186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.829246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.829466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.829526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.829683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.829733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 
00:28:24.954 [2024-12-08 06:32:14.829976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.830041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.830219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.830285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.830550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.830609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.830840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.830908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.831180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.831241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.831537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.831595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.831860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.831926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.832197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.832256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.832455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.832518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 00:28:24.954 [2024-12-08 06:32:14.832770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.954 [2024-12-08 06:32:14.832813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.954 qpair failed and we were unable to recover it. 
00:28:24.954 [2024-12-08 06:32:14.833031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.954 [2024-12-08 06:32:14.833091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:24.954 qpair failed and we were unable to recover it.
00:28:24.954 [... the three-line triple above (connect() failed, errno = 111 / sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats roughly 200 more times between 06:32:14.833 and 06:32:14.893, identical except for advancing timestamps ...]
00:28:24.959 [2024-12-08 06:32:14.893457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.959 [2024-12-08 06:32:14.893519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:24.959 qpair failed and we were unable to recover it.
00:28:24.959 [2024-12-08 06:32:14.893761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.893805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.894068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.894129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.894388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.894448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.894635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.894677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.894885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.894954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.895273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.895345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.895611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.895672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.895946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.896007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.896220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.896282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.896543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.896610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 
00:28:24.959 [2024-12-08 06:32:14.896815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.896878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.897109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.897171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.897400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.897462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.897670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.897716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.897946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.898009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.898224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.898289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.898555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.898621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.898839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.898901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.899125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.899186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.899403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.899471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 
00:28:24.959 [2024-12-08 06:32:14.899740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.899783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.899993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.900052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.900305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.900381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.900640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.900682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.900950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.900991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.901162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.901223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.901410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.901472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.901758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.901801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.902004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.902045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.902277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.902338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 
00:28:24.959 [2024-12-08 06:32:14.902588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.902652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.902921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.902964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.903226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.903287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.903545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.959 [2024-12-08 06:32:14.903588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.959 qpair failed and we were unable to recover it. 00:28:24.959 [2024-12-08 06:32:14.903859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.903922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.904133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.904207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.904434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.904507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.904714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.904778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.904966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.905028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.905319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.905362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-08 06:32:14.905620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.905680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.905912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.905954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.906213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.906274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.906560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.906620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.906834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.906876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.907130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.907203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.907468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.907531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.907844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.907888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.908151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.908212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.908504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.908565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-08 06:32:14.908779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.908822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.909059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.909119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.909439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.909501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.909755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.909798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.910019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.910081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.910313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.910373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.910530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.910573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.910804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.910875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.911151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.911222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.911501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.911565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-08 06:32:14.911696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.911753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.911899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.911967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.912251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.912310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.912512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.912553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.912782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.912849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.913108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.913168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.913392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.913452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.913669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.913711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.913981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.914044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.914303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.914363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 
00:28:24.960 [2024-12-08 06:32:14.914536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.914577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.960 [2024-12-08 06:32:14.914794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.960 [2024-12-08 06:32:14.914859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.960 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.915109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.915172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.915381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.915442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.915662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.915705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.915926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.915987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.916252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.916321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.916530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.916574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.916788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.916852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.917118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.917179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 
00:28:24.961 [2024-12-08 06:32:14.917438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.917496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.917630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.917672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.917905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.917965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.918230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.918291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.918530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.918571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.918780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.918848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.919082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.919124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.919334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.919403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.919631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.919673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.919941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.920004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 
00:28:24.961 [2024-12-08 06:32:14.920228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.920287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.920502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.920543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.920776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.920819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.921039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.921105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.921320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.921379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.921594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.921640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.921870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.921932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.922099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.922158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.922384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.922446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.922694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.922745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 
00:28:24.961 [2024-12-08 06:32:14.923021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.923081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.923352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.923413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.923635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.923682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.961 qpair failed and we were unable to recover it. 00:28:24.961 [2024-12-08 06:32:14.923872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.961 [2024-12-08 06:32:14.923932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.924167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.924230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.924503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.924564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.924783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.924855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.925109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.925170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.925355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.925419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.925666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.925708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 
00:28:24.962 [2024-12-08 06:32:14.925951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.926011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.926225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.926284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.926510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.926572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.926801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.926865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.927102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.927162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.927440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.927513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.927733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.927776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.928022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.928085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.928300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.928359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.928585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.928651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 
00:28:24.962 [2024-12-08 06:32:14.928885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.928946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.929216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.929277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.929555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.929615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.929744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.929786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.929951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.930016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.930188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.930258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.930466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.930540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.930711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.930780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.931070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.931117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 00:28:24.962 [2024-12-08 06:32:14.931383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.962 [2024-12-08 06:32:14.931443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.962 qpair failed and we were unable to recover it. 
00:28:24.962 [2024-12-08 06:32:14.931680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.962 [2024-12-08 06:32:14.931731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:24.962 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats roughly 210 times between 06:32:14.931680 and 06:32:14.992708; only the first and last occurrences are shown ...]
00:28:24.968 [2024-12-08 06:32:14.992667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.968 [2024-12-08 06:32:14.992708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:24.968 qpair failed and we were unable to recover it.
00:28:24.968 [2024-12-08 06:32:14.992972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.993013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.993227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.993285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.993542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.993603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.993872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.993914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.994175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.994234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.994497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.994557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.994741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.994783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.994992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.995057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.995283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.995342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.995576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.995616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 
00:28:24.968 [2024-12-08 06:32:14.995825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.995885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.996150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.996209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.996431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.996491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.996715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.996765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.997024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.997086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.997303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.997364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.997623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.997683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.997923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.997963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.998227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.998286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.998528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.998589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 
00:28:24.968 [2024-12-08 06:32:14.998853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.998914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.999184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.999245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.999459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.968 [2024-12-08 06:32:14.999521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.968 qpair failed and we were unable to recover it. 00:28:24.968 [2024-12-08 06:32:14.999773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:14.999815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.000053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.000113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.000372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.000433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.000684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.000735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.000954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.000995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.001221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.001280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.001499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.001558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 
00:28:24.969 [2024-12-08 06:32:15.001793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.001862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.002119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.002179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.002448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.002507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.002788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.002830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.003084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.003145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.003362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.003423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.003655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.003696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.003962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.004005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.004273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.004332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.004545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.004604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 
00:28:24.969 [2024-12-08 06:32:15.004825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.004888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.005107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.005168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.005400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.005460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.005731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.005773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.005982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.006041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.006294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.006354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.006629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.006689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.006914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.006955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.007175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.007236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.007519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.007579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 
00:28:24.969 [2024-12-08 06:32:15.007839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.007881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.008142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.008202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.008377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.008437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.008655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.008697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.008884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.008944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.009126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.009187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.009463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.009523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.009776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.009818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.010045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.010105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.010356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.010416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 
00:28:24.969 [2024-12-08 06:32:15.010655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.969 [2024-12-08 06:32:15.010702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.969 qpair failed and we were unable to recover it. 00:28:24.969 [2024-12-08 06:32:15.010927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.010968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.011199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.011261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.011481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.011542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.011806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.011904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.012127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.012185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.012441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.012501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.012703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.012755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.013020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.013083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.013305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.013366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 
00:28:24.970 [2024-12-08 06:32:15.013611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.013652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.013928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.013970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.014128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.014189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.014362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.014423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.014675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.014717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.014948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.015007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.015281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.015342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.015600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.015661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.015939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.015998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.016271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.016331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 
00:28:24.970 [2024-12-08 06:32:15.016507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.016568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.016736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.016778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.017018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.017084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.017253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.017315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.017501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.017542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.017805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.017848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.018052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.018093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.018344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.018385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.018637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.018679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.018870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.018932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 
00:28:24.970 [2024-12-08 06:32:15.019131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.019190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.019459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.019519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.019776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.019818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.020081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.020139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.970 [2024-12-08 06:32:15.020408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.970 [2024-12-08 06:32:15.020467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.970 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.020730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.020773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.020975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.021015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.021281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.021341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.021580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.021641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.021902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.021945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 
00:28:24.971 [2024-12-08 06:32:15.022229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.022287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.022494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.022554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.022700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.022778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.023045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.023106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.023380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.023439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.023613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.023654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.023901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.023963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.024228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.024290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.024518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.024577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.024813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.024874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 
00:28:24.971 [2024-12-08 06:32:15.025139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.025199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.025370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.025430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.025691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.025746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.025972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.026032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.026259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.026318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.026601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.026659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.026941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.027002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.027272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.027331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.027602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.027661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.027840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.027880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 
00:28:24.971 [2024-12-08 06:32:15.028099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.028159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.028431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.028493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.028707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.028762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.029018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.029085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.029260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.029316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.029557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.029616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.029832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.029874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.030099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.030160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.030432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.030498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 00:28:24.971 [2024-12-08 06:32:15.030704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.030757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it. 
00:28:24.971 [2024-12-08 06:32:15.031021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.971 [2024-12-08 06:32:15.031080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:24.971 qpair failed and we were unable to recover it.
[... the same three-message error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt between 06:32:15.031021 and 06:32:15.093080 (log time 00:28:24.971 through 00:28:25.248); only the timestamps advance ...]
00:28:25.248 [2024-12-08 06:32:15.093015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.093080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it.
00:28:25.248 [2024-12-08 06:32:15.093336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.093398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it. 00:28:25.248 [2024-12-08 06:32:15.093659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.093700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it. 00:28:25.248 [2024-12-08 06:32:15.093904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.093946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it. 00:28:25.248 [2024-12-08 06:32:15.094173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.094241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it. 00:28:25.248 [2024-12-08 06:32:15.094457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.094518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it. 00:28:25.248 [2024-12-08 06:32:15.094783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.094825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it. 00:28:25.248 [2024-12-08 06:32:15.095092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.095154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it. 00:28:25.248 [2024-12-08 06:32:15.095317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.095382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it. 00:28:25.248 [2024-12-08 06:32:15.095575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.095615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it. 00:28:25.248 [2024-12-08 06:32:15.095792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.095855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it. 
00:28:25.248 [2024-12-08 06:32:15.096086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.096145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it. 00:28:25.248 [2024-12-08 06:32:15.096385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.096427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it. 00:28:25.248 [2024-12-08 06:32:15.096687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.248 [2024-12-08 06:32:15.096741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.248 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.097016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.097077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.097311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.097377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.097558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.097602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.097831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.097900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.098162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.098229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.098401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.098462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.098740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.098786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 
00:28:25.249 [2024-12-08 06:32:15.099021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.099081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.099303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.099366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.099587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.099629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.099886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.099928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.100210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.100271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.100513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.100576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.100818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.100878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.101160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.101221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.101476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.101537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.101791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.101833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 
00:28:25.249 [2024-12-08 06:32:15.101999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.102070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.102232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.102295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.102511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.102572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.102855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.102924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.103195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.103256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.103510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.103574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.103744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.103784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.103999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.104074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.104358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.104422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.104650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.104692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 
00:28:25.249 [2024-12-08 06:32:15.104972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.105044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.105272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.105333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.105553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.105617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.105890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.105952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.106213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.106274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.249 qpair failed and we were unable to recover it. 00:28:25.249 [2024-12-08 06:32:15.106556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.249 [2024-12-08 06:32:15.106601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.106817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.106884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.107164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.107225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.107496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.107556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.107788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.107861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 
00:28:25.250 [2024-12-08 06:32:15.108108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.108149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.108354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.108394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.108641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.108682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.108964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.109027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.109299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.109360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.109578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.109619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.109881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.109943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.110223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.110286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.110563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.110625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.110840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.110905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 
00:28:25.250 [2024-12-08 06:32:15.111155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.111216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.111436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.111500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.111710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.111762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.111964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.112026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.112242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.112302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.112580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.112648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.112877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.112940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.113203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.113262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.113526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.113585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.113759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.113799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 
00:28:25.250 [2024-12-08 06:32:15.113963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.114025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.114272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.114332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.114597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.114638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.114908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.114970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.115324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.115427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.115774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.115820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.116097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.116161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.116469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.116537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.116830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.116876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.117107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.117172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 
00:28:25.250 [2024-12-08 06:32:15.117484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.117549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.117856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.117900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.118153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.118196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.250 qpair failed and we were unable to recover it. 00:28:25.250 [2024-12-08 06:32:15.118503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.250 [2024-12-08 06:32:15.118568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.118833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.118876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.119090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.119134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.119367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.119433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.119756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.119824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.120087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.120153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.120438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.120514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 
00:28:25.251 [2024-12-08 06:32:15.120792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.120835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.121053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.121118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.121417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.121482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.121768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.121831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.122019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.122062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.122313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.122377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.122677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.122770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.122972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.123012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.123262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.123305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.123562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.123627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 
00:28:25.251 [2024-12-08 06:32:15.123947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.123989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.124221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.124264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.124451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.124518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.124825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.124867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.125142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.125208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.125548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.125615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.125943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.125986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.126195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.126260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.126519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.126586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.126887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.126931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 
00:28:25.251 [2024-12-08 06:32:15.127136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.127177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.127392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.127457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.127772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.127815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.128069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.128151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.128479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.128547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.128843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.128886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.129100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.129165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.129373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.129438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.129757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.129825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.251 [2024-12-08 06:32:15.130122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.130188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 
00:28:25.251 [2024-12-08 06:32:15.130452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.251 [2024-12-08 06:32:15.130517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.251 qpair failed and we were unable to recover it. 00:28:25.252 [2024-12-08 06:32:15.130845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.252 [2024-12-08 06:32:15.130927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.252 qpair failed and we were unable to recover it. 00:28:25.252 [2024-12-08 06:32:15.131192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.252 [2024-12-08 06:32:15.131258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.252 qpair failed and we were unable to recover it. 00:28:25.252 [2024-12-08 06:32:15.131568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.252 [2024-12-08 06:32:15.131633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.252 qpair failed and we were unable to recover it. 00:28:25.252 [2024-12-08 06:32:15.131965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.252 [2024-12-08 06:32:15.132031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.252 qpair failed and we were unable to recover it. 00:28:25.252 [2024-12-08 06:32:15.132353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.252 [2024-12-08 06:32:15.132419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.252 qpair failed and we were unable to recover it. 00:28:25.252 [2024-12-08 06:32:15.132753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.252 [2024-12-08 06:32:15.132822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.252 qpair failed and we were unable to recover it. 00:28:25.252 [2024-12-08 06:32:15.133123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.252 [2024-12-08 06:32:15.133188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.252 qpair failed and we were unable to recover it. 00:28:25.252 [2024-12-08 06:32:15.133535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.252 [2024-12-08 06:32:15.133602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.252 qpair failed and we were unable to recover it. 00:28:25.252 [2024-12-08 06:32:15.133911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.252 [2024-12-08 06:32:15.133986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.252 qpair failed and we were unable to recover it. 
00:28:25.252 [2024-12-08 06:32:15.134293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.252 [2024-12-08 06:32:15.134361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.252 qpair failed and we were unable to recover it.
[... the three-line error above repeats ~210 times, timestamps 06:32:15.134293 through 06:32:15.209625, identical apart from timestamps: every attempt targets tqpair=0x7f7548000b90 at 10.0.0.2, port=4420 and fails with errno = 111 ...]
00:28:25.258 [2024-12-08 06:32:15.209847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.209912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.210236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.210301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.210557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.210621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.210939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.211004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.211296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.211361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.211654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.211719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.211996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.212061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.212357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.212422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.212679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.212762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.213069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.213132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 
00:28:25.258 [2024-12-08 06:32:15.213449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.213514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.213768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.213836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.214044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.214108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.214428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.214492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.214798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.214876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.215137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.215201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.215500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.215563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.215832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.215899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.216169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.216232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.216542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.216607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 
00:28:25.258 [2024-12-08 06:32:15.216882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.216948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.217257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.217322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.217576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.217640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.217962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.218029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.218342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.218406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.218697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.218783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.219093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.219158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.219467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.219530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.219848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.219914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.220179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.220244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 
00:28:25.258 [2024-12-08 06:32:15.220555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.220619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.220957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.258 [2024-12-08 06:32:15.221024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.258 qpair failed and we were unable to recover it. 00:28:25.258 [2024-12-08 06:32:15.221327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.221391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.221688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.221775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.222089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.222154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.222457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.222520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.222801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.222867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.223178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.223243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.223503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.223568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.223876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.223942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 
00:28:25.259 [2024-12-08 06:32:15.224241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.224307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.224628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.224692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.225012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.225077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.225384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.225449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.225750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.225815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.226077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.226142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.226436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.226501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.226822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.226887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.227192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.227257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.227561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.227625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 
00:28:25.259 [2024-12-08 06:32:15.227887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.227953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.228189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.228253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.228495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.228560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.228776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.228841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.229049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.229123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.229341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.229406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.229608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.229671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.229918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.229985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.230216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.230292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.230530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.230593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 
00:28:25.259 [2024-12-08 06:32:15.230840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.230905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.231154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.231218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.231410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.231473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.231661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.231740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.231998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.232063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.232282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.232345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.232524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.232588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.232835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.232901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.233164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.233227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 00:28:25.259 [2024-12-08 06:32:15.233474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.259 [2024-12-08 06:32:15.233549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.259 qpair failed and we were unable to recover it. 
00:28:25.259 [2024-12-08 06:32:15.233773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.233839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.234058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.234122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.234362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.234426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.234645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.234709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.234940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.235004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.235210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.235274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.235509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.235573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.235800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.235866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.236095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.236159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.236346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.236409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 
00:28:25.260 [2024-12-08 06:32:15.236647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.236711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.236964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.237030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.237257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.237322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.237533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.237597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.237784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.237851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.238059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.238123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.238318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.238382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.238614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.238678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.238906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.238970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.239195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.239260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 
00:28:25.260 [2024-12-08 06:32:15.239497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.239561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.239788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.239853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.240095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.240161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.240366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.240430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.240645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.240719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.240949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.240983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.241099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.241134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.241277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.241312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.241446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.241480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.241615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.241649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 
00:28:25.260 [2024-12-08 06:32:15.241778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.241813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.241948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.241982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.242117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.242150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.242272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.242306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.242415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.242449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.242594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.242627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.242815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.242848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.260 [2024-12-08 06:32:15.242984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.260 [2024-12-08 06:32:15.243017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.260 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.243157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.243189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.243314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.243347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 
00:28:25.261 [2024-12-08 06:32:15.243482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.243515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.243687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.243719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.243878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.243911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.244057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.244090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.244248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.244280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.244451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.244496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.244640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.244672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.244835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.244868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.244993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.245025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.245181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.245214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 
00:28:25.261 [2024-12-08 06:32:15.245376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.245419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.245592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.245658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.245881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.245913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.246073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.246136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.246384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.246448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.246681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.246784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.246899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.246929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.247076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.247140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.247371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.247435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 00:28:25.261 [2024-12-08 06:32:15.247642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.261 [2024-12-08 06:32:15.247705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.261 qpair failed and we were unable to recover it. 
00:28:25.261 [2024-12-08 06:32:15.247872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.261 [2024-12-08 06:32:15.247902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.261 qpair failed and we were unable to recover it.
[... ~200 further identical "connect() failed, errno = 111" / "sock connection error" records for tqpair=0x7f7548000b90 (addr=10.0.0.2, port=4420), timestamped 2024-12-08 06:32:15.248077 through 06:32:15.305848, elided as duplicates; only the final record is kept below ...]
00:28:25.267 [2024-12-08 06:32:15.306063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.267 [2024-12-08 06:32:15.306127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.267 qpair failed and we were unable to recover it.
00:28:25.267 [2024-12-08 06:32:15.306358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.306420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.306621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.306685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.306953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.307017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.307246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.307309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.307512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.307576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.307785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.307851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.308080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.308143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.308346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.308409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.308596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.308660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.308887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.308951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 
00:28:25.267 [2024-12-08 06:32:15.309153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.309216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.309392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.309457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.309655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.309718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.309946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.310010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.310213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.310278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.310484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.310547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.310763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.310828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.311029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.311093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.311324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.311387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.311589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.311652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 
00:28:25.267 [2024-12-08 06:32:15.311867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.311933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.312165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.312239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.312439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.312502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.312702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.267 [2024-12-08 06:32:15.312782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.267 qpair failed and we were unable to recover it. 00:28:25.267 [2024-12-08 06:32:15.312986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.313048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.313283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.313347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.313586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.313650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.314668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.314888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.315286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.315353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.315527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.315591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 
00:28:25.268 [2024-12-08 06:32:15.315828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.315894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.316104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.316167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.316393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.316457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.316659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.316737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.316975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.317039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.317284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.317349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.317577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.317639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.317860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.317925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.318165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.318228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.318457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.318519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 
00:28:25.268 [2024-12-08 06:32:15.318753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.318818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.319053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.319118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.319319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.319382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.319558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.319621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.319852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.319918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.320156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.320219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.320418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.320482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.320685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.320768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.321020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.321084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.321346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.321413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 
00:28:25.268 [2024-12-08 06:32:15.321640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.321704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.321966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.322031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.322264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.322328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.322533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.322598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.322830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.322909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.323125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.323190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.323393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.323458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.323665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.323746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.323987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.324052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.324305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.324373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 
00:28:25.268 [2024-12-08 06:32:15.324627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.268 [2024-12-08 06:32:15.324692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.268 qpair failed and we were unable to recover it. 00:28:25.268 [2024-12-08 06:32:15.324956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.325031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.325243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.325307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.325543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.325620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.325871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.325939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.326181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.326245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.326459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.326523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.326743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.326809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.327015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.327079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.327268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.327341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 
00:28:25.269 [2024-12-08 06:32:15.327590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.327656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.327910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.327976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.328206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.328270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.328501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.328566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.328824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.328893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.329115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.329179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.329385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.329450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.329649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.329714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.329968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.330032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.330233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.330311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 
00:28:25.269 [2024-12-08 06:32:15.330561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.330628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.330864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.330931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.331138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.331202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.331408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.331471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.331708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.331804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.332015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.332080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.332305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.332368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.332607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.332671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.332909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.332975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.333221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.333287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 
00:28:25.269 [2024-12-08 06:32:15.333506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.333572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.333772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.333839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.334037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.334101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.334339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.334402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.334636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.334704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.334932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.334998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.335203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.335268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.335478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.335542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.269 [2024-12-08 06:32:15.335774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.269 [2024-12-08 06:32:15.335840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.269 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.336047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.336125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 
00:28:25.270 [2024-12-08 06:32:15.336365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.336440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.336678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.336771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.336981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.337060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.337304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.337369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.337600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.337665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.337920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.337985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.338222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.338286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.338540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.338607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.338836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.338902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.339133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.339197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 
00:28:25.270 [2024-12-08 06:32:15.339397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.339461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.339660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.339742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.339966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.340031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.340252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.340317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.340518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.340582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.340825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.340891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.341093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.341156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.341385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.341453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.341704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.341785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.342019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.342083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 
00:28:25.270 [2024-12-08 06:32:15.342312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.342376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.342557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.342622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.342858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.342924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.343173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.343238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.343444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.343509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.343780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.343847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.344076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.344155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.344374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.344440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.344688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.344772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 00:28:25.270 [2024-12-08 06:32:15.345004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.270 [2024-12-08 06:32:15.345068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.270 qpair failed and we were unable to recover it. 
00:28:25.270 [2024-12-08 06:32:15.345299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.270 [2024-12-08 06:32:15.345363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.270 qpair failed and we were unable to recover it.
00:28:25.270 [... the same three-line failure (connect() failed, errno = 111 -> sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats back-to-back from 06:32:15.345 through 06:32:15.407 ...]
00:28:25.551 [2024-12-08 06:32:15.407336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.551 [2024-12-08 06:32:15.407399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.551 qpair failed and we were unable to recover it.
00:28:25.551 [2024-12-08 06:32:15.407628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-12-08 06:32:15.407691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.551 qpair failed and we were unable to recover it. 00:28:25.551 [2024-12-08 06:32:15.407918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-12-08 06:32:15.407998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.551 qpair failed and we were unable to recover it. 00:28:25.551 [2024-12-08 06:32:15.408247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-12-08 06:32:15.408311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.551 qpair failed and we were unable to recover it. 00:28:25.551 [2024-12-08 06:32:15.408541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-12-08 06:32:15.408604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.551 qpair failed and we were unable to recover it. 00:28:25.551 [2024-12-08 06:32:15.408808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-12-08 06:32:15.408876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.551 qpair failed and we were unable to recover it. 00:28:25.551 [2024-12-08 06:32:15.409076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-12-08 06:32:15.409139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.551 qpair failed and we were unable to recover it. 00:28:25.551 [2024-12-08 06:32:15.409316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-12-08 06:32:15.409382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.551 qpair failed and we were unable to recover it. 00:28:25.551 [2024-12-08 06:32:15.409624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-12-08 06:32:15.409689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.551 qpair failed and we were unable to recover it. 00:28:25.551 [2024-12-08 06:32:15.409919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.409983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.410184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.410248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 
00:28:25.552 [2024-12-08 06:32:15.410447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.410511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.410718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.410798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.411052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.411117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.411351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.411415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.411642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.411704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.411969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.412036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.412286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.412362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.412574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.412637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.412891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.412958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.413125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.413189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 
00:28:25.552 [2024-12-08 06:32:15.413417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.413481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.413783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.413862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.414082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.414146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.414380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.414445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.414649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.414713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.414973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.415041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.415266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.415330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.415573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.415638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.415907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.415973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.416185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.416250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 
00:28:25.552 [2024-12-08 06:32:15.416484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.416561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.416806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.416880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.417130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.417195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.417432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.417495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.417740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.417805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.417987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.418069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.418313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.418388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.418640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.418704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.418970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.419035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.419266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.419330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 
00:28:25.552 [2024-12-08 06:32:15.419531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.419605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.419844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.419932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.420144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.420207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.420412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.420477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.420673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.420755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.420992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.421056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.421273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.421339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.421552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.421619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.421850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.421916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 00:28:25.552 [2024-12-08 06:32:15.422153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.552 [2024-12-08 06:32:15.422217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.552 qpair failed and we were unable to recover it. 
00:28:25.552 [2024-12-08 06:32:15.422413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.422477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.422717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.422803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.423052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.423116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.423355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.423418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.423651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.423715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.423955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.424035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.424281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.424356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.424571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.424635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.424908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.424974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.425150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.425213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 
00:28:25.553 [2024-12-08 06:32:15.425440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.425515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.425777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.425856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.426075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.426138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.426343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.426407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.426569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.426632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.426884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.426959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.427201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.427269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.427497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.427562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.427821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.427887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.428090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.428154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 
00:28:25.553 [2024-12-08 06:32:15.428359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.428423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.428627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.428691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.428925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.428991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.429200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.429264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.429492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.429555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.429774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.429840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.430019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.430085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.430325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.430391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.430568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.430632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.430875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.430939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 
00:28:25.553 [2024-12-08 06:32:15.431173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.431236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.431440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.431520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.431757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.431836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.432015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.432079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.432254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.432320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.432520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.432583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.432828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.432893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.433088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.433156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.433394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.433460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.433636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.433699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 
00:28:25.553 [2024-12-08 06:32:15.433918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.433982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.434215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.434279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.434510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.434575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.434788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.434855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.435060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.435123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.435307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.435371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.435573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.435636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.435894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.435968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.553 [2024-12-08 06:32:15.436210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.553 [2024-12-08 06:32:15.436277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.553 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.436453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.436518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 
00:28:25.554 [2024-12-08 06:32:15.436717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.436802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.437038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.437102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.437300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.437380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.437626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.437699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.437943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.438008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.438234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.438296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.438500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.438563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.438776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.438844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.439056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.439122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.439332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.439396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 
00:28:25.554 [2024-12-08 06:32:15.439641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.439705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.439954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.440018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.440229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.440302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.440558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.440623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.440871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.440936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.441196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.441262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.441460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.441524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.441743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.441809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.442044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.442109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.442313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.442375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 
00:28:25.554 [2024-12-08 06:32:15.442588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.442655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.442885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.442952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.443168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.443232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.443430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.443494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.443743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.443810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.444045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.444111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.444317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.444383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.444591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.444656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.444881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.444947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 00:28:25.554 [2024-12-08 06:32:15.445150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.554 [2024-12-08 06:32:15.445214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.554 qpair failed and we were unable to recover it. 
00:28:25.554 [2024-12-08 06:32:15.445448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.554 [2024-12-08 06:32:15.445516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.554 qpair failed and we were unable to recover it.
00:28:25.554 [... the same three-line failure (connect() errno = 111 -> sock connection error for tqpair=0x7f7548000b90, addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats verbatim, with only the timestamps advancing, for every retry between 06:32:15.445448 and 06:32:15.504948 ...]
00:28:25.559 [2024-12-08 06:32:15.504883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.559 [2024-12-08 06:32:15.504948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.559 qpair failed and we were unable to recover it.
00:28:25.559 [2024-12-08 06:32:15.505175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.505240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.505455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.505521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.505763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.505829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.506062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.506126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.506360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.506424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.506654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.506718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.506942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.507007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.507233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.507296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.507500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.507573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.507779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.507845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 
00:28:25.559 [2024-12-08 06:32:15.508048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.508113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.508338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.508402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.508630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.508694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.508917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.508981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.509217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.509281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.509476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.509540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.509752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.509818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.510020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.510084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.510325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.510389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.510611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.510675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 
00:28:25.559 [2024-12-08 06:32:15.510891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.510955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.511152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.511216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.511461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.511525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.511746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.511811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.512017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.512081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.512314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.512378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.512582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.512645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.512904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.512970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.513173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.513236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.513466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.513530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 
00:28:25.559 [2024-12-08 06:32:15.513754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.513819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.514046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.514108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.514310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.514374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.514608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.514672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.514924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.514988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.515235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.515299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.515496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.515560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.515770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.515836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.516040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.516105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.516304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.516366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 
00:28:25.559 [2024-12-08 06:32:15.516607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.516671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.516930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.516994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.517220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.517284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.517491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.559 [2024-12-08 06:32:15.517556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.559 qpair failed and we were unable to recover it. 00:28:25.559 [2024-12-08 06:32:15.517782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.517849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.518061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.518124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.518354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.518417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.518615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.518679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.518921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.518996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.519230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.519294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 
00:28:25.560 [2024-12-08 06:32:15.519497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.519561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.519772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.519837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.520038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.520102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.520327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.520391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.520625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.520688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.520909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.520974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.521151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.521215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.521390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.521452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.521686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.521765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.521978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.522041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 
00:28:25.560 [2024-12-08 06:32:15.522244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.522307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.522538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.522601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.522871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.522936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.523169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.523233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.523467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.523531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.523767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.523833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.524062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.524125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.524331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.524395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.524634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.524698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.524930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.524993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 
00:28:25.560 [2024-12-08 06:32:15.525193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.525257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.525483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.525546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.525754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.525819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.526061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.526126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.526304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.526367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.526605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.526670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.526928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.526992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.527203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.527267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.527467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.527530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.527768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.527835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 
00:28:25.560 [2024-12-08 06:32:15.528074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.528137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.528340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.528404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.528653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.528717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.528976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.529039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.529269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.529333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.529538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.529601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.529842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.529907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.530110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.530174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.530407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.530481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.530682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.530759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 
00:28:25.560 [2024-12-08 06:32:15.530996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.531060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.531237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.531301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.531500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.560 [2024-12-08 06:32:15.531563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.560 qpair failed and we were unable to recover it. 00:28:25.560 [2024-12-08 06:32:15.531792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.531858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.532064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.532129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.532360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.532423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.532630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.532694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.532961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.533026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.533253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.533317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.533548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.533613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 
00:28:25.561 [2024-12-08 06:32:15.533863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.533929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.534130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.534193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.534441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.534506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.534702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.534782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.534987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.535050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.535278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.535343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.535588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.535652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.535869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.535933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.536142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.536206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.536451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.536515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 
00:28:25.561 [2024-12-08 06:32:15.536755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.536821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.537051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.537115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.537322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.537387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.537584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.537647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.537872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.537937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.538182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.538247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.538482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.538546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.538757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.538823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.539012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.539075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.539301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.539365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 
00:28:25.561 [2024-12-08 06:32:15.539608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.539671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.539942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.540008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.540238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.540301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.540532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.540596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.540807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.540873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.541073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.541136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.541369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.541434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.541662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.541741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.541974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.542048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 00:28:25.561 [2024-12-08 06:32:15.542256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.561 [2024-12-08 06:32:15.542320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.561 qpair failed and we were unable to recover it. 
00:28:25.561 [2024-12-08 06:32:15.542518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.561 [2024-12-08 06:32:15.542582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.561 qpair failed and we were unable to recover it.
[... the same three-record sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f7548000b90; qpair failed and we were unable to recover it.) repeats 49 more times, 06:32:15.542795 through 06:32:15.556217 ...]
00:28:25.562 [2024-12-08 06:32:15.556451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.562 [2024-12-08 06:32:15.556534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.562 qpair failed and we were unable to recover it.
00:28:25.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1180091 Killed "${NVMF_APP[@]}" "$@"
00:28:25.562 [2024-12-08 06:32:15.556811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.562 [2024-12-08 06:32:15.556910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.562 qpair failed and we were unable to recover it.
00:28:25.562 [2024-12-08 06:32:15.557168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.562 [2024-12-08 06:32:15.557249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.562 qpair failed and we were unable to recover it.
00:28:25.562 [2024-12-08 06:32:15.557522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.562 [2024-12-08 06:32:15.557606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.562 qpair failed and we were unable to recover it.
00:28:25.562 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:25.562 [2024-12-08 06:32:15.557862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.562 [2024-12-08 06:32:15.557949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.562 qpair failed and we were unable to recover it.
00:28:25.562 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:25.562 [2024-12-08 06:32:15.558236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.562 [2024-12-08 06:32:15.558324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.562 qpair failed and we were unable to recover it.
00:28:25.562 [2024-12-08 06:32:15.558553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.562 [2024-12-08 06:32:15.558634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.563 qpair failed and we were unable to recover it.
00:28:25.563 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:25.563 [2024-12-08 06:32:15.558913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.563 [2024-12-08 06:32:15.559001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.563 qpair failed and we were unable to recover it.
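For context: errno = 111 is ECONNREFUSED, which is what the host-side connect() calls above return while the killed target is down. A minimal sketch of the kill-and-reconnect pattern this test exercises, with a hypothetical nvmf_tgt path and nc used only as a stand-in probe (this is not the actual target_disconnect.sh, which drives SPDK through its own helpers such as nvmfappstart and disconnect_init):

  #!/usr/bin/env bash
  # Sketch only: placeholder path and address; real flags/helpers live in the test scripts.
  NVMF_APP=(/path/to/spdk/build/bin/nvmf_tgt)   # hypothetical location
  "${NVMF_APP[@]}" -m 0xF0 &                    # start the target app
  tgt_pid=$!
  sleep 1
  kill -9 "$tgt_pid"                            # corresponds to the "Killed" line in the log above
  # With no listener on 10.0.0.2:4420, every TCP connect() is refused:
  nc -z -w 1 10.0.0.2 4420 || echo "connect() refused, errno = 111 (ECONNREFUSED)"
  "${NVMF_APP[@]}" -m 0xF0 &                    # restart the target so host qpairs can reconnect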
00:28:25.563 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:25.563 [2024-12-08 06:32:15.559246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.563 [2024-12-08 06:32:15.559331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.563 qpair failed and we were unable to recover it.
00:28:25.563 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:25.563 [2024-12-08 06:32:15.559585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.563 [2024-12-08 06:32:15.559673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.563 qpair failed and we were unable to recover it.
00:28:25.563 [2024-12-08 06:32:15.560016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb2570 is same with the state(6) to be set
00:28:25.563 [2024-12-08 06:32:15.560380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.563 [2024-12-08 06:32:15.560485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.563 qpair failed and we were unable to recover it.
[... the same triplet, now against tqpair=0x7f7540000b90, repeats 23 more times, 06:32:15.560694 through 06:32:15.565532 ...]
00:28:25.563 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1180543
00:28:25.563 [2024-12-08 06:32:15.565786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.563 [2024-12-08 06:32:15.565822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.563 qpair failed and we were unable to recover it.
00:28:25.563 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:25.563 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1180543
00:28:25.563 [2024-12-08 06:32:15.565944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.563 [2024-12-08 06:32:15.565978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.563 qpair failed and we were unable to recover it.
00:28:25.563 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1180543 ']'
00:28:25.563 [2024-12-08 06:32:15.566141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.563 [2024-12-08 06:32:15.566176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.563 qpair failed and we were unable to recover it.
00:28:25.563 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:25.563 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:25.563 [2024-12-08 06:32:15.566290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.563 [2024-12-08 06:32:15.566325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.563 qpair failed and we were unable to recover it.
00:28:25.563 [2024-12-08 06:32:15.566491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.563 [2024-12-08 06:32:15.566526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.563 qpair failed and we were unable to recover it.
00:28:25.563 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:25.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:25.563 [2024-12-08 06:32:15.566662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.563 [2024-12-08 06:32:15.566697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.563 qpair failed and we were unable to recover it.
00:28:25.563 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:25.563 [2024-12-08 06:32:15.566824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.563 [2024-12-08 06:32:15.566858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.563 qpair failed and we were unable to recover it.
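The waitforlisten call traced above, with rpc_addr=/var/tmp/spdk.sock and max_retries=100, blocks until the freshly started nvmf_tgt is ready. A minimal sketch of what such a poll loop can look like, assuming readiness is signaled by the target creating its UNIX-domain RPC socket (a simplified stand-in, not the actual helper from autotest_common.sh):

  # Sketch: poll for the RPC socket of process $pid, waitforlisten-style.
  # The names mirror the trace above; the loop body itself is an assumption.
  pid=1180543
  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      kill -0 "$pid" 2>/dev/null || { echo "process $pid exited early"; exit 1; }
      [ -S "$rpc_addr" ] && { echo "listening on $rpc_addr"; break; }   # socket exists => target is up
      sleep 0.5
  done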
00:28:25.563 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:25.563 [2024-12-08 06:32:15.567002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.563 [2024-12-08 06:32:15.567037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.563 qpair failed and we were unable to recover it.
[... the same triplet for tqpair=0x7f7540000b90 repeats 84 more times, 06:32:15.567174 through 06:32:15.580687 ...]
00:28:25.565 [2024-12-08 06:32:15.580845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.565 [2024-12-08 06:32:15.580893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.565 qpair failed and we were unable to recover it.
[... the same triplet repeats 23 more times, 06:32:15.581051 through 06:32:15.584593, now alternating among tqpair=0x7f7548000b90, 0x7f7540000b90, and 0xca45d0 ...]
00:28:25.566 [2024-12-08 06:32:15.584685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.584713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.584821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.584850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.584945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.584972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.585066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.585095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.585226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.585254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.585388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.585417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.585567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.585594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.585700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.585736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.585861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.585888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.586012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.586038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 
00:28:25.566 [2024-12-08 06:32:15.586166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.586193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.586332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.586357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.586480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.586505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.586628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.586655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.586756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.586782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.586870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.586895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.587020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.587046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.587139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.587170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.587296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.587322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.587421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.587447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 
00:28:25.566 [2024-12-08 06:32:15.587543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.587569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.587726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.587753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.587861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.587887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.588009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.588034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.588159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.588184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.588282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.588308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.588432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.588458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.588554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.588580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.588709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.588742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.588845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.588871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 
00:28:25.566 [2024-12-08 06:32:15.588969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.588996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.589120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.589146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.589286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.589313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.589441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.589467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.589590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.589617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.589712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.589743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.589838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.589864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.590015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.590040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.590145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.566 [2024-12-08 06:32:15.590170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.566 qpair failed and we were unable to recover it. 00:28:25.566 [2024-12-08 06:32:15.590287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.590313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 
00:28:25.567 [2024-12-08 06:32:15.590431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.590457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.590585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.590612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.590737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.590763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.590859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.590885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.591016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.591043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.591143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.591170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.591290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.591316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.591413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.591439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.591589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.591615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.591751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.591779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 
00:28:25.567 [2024-12-08 06:32:15.591874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.591900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.592029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.592055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.592183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.592209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.592307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.592334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.592486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.592511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.592663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.592689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.592826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.592853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.592949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.592979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.593073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.593099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.593220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.593246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 
00:28:25.567 [2024-12-08 06:32:15.593401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.593428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.593552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.593577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.593670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.593696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.593803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.593830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.593930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.593956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.594085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.594111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.594259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.594285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.594378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.594403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.594528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.594553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.594697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.594747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 
00:28:25.567 [2024-12-08 06:32:15.594861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.594903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.595075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.595102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.595231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.595258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.595382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.595412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.595512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.595537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.595638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.595665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.595772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.595801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.595901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.595928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.596042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.596070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.596193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.596220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 
00:28:25.567 [2024-12-08 06:32:15.596338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.596367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.596486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.596513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.596641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.596667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.596794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.596821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.596925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.596958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.597078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.597104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.597198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.597224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.597351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.567 [2024-12-08 06:32:15.597377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.567 qpair failed and we were unable to recover it. 00:28:25.567 [2024-12-08 06:32:15.597498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.597527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.597654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.597680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 
00:28:25.568 [2024-12-08 06:32:15.597786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.597813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.597904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.597930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.598082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.598110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.598232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.598258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.598358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.598385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.598539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.598565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.598727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.598756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.598879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.598905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.599033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.599062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.599155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.599181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 
00:28:25.568 [2024-12-08 06:32:15.599303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.599332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.599455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.599481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.599610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.599636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.599767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.599795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.599923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.599950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.600070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.600096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.600219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.600246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.600341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.600368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.600493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.600519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.600641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.600671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 
00:28:25.568 [2024-12-08 06:32:15.600780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.600807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.600933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.600959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.601092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.601120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.601248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.601274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.601398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.601425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.601538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.601565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.601692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.601719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.601827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.601854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.601979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.602007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.602158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.602184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 
00:28:25.568 [2024-12-08 06:32:15.602303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.602330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.602422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.602451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.602577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.602603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.602693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.602719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.602835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.602862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.602968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.602999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.603134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.603160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.603279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.603305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.603393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.603422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.603522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.603548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 
00:28:25.568 [2024-12-08 06:32:15.603681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.603708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.603809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.603836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.603939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.603965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.604084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.604111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.604236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.604262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.604388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.604414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.604532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.604558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.604666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.604692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.604796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.604826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 00:28:25.568 [2024-12-08 06:32:15.604923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.604949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.568 qpair failed and we were unable to recover it. 
00:28:25.568 [2024-12-08 06:32:15.605052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.568 [2024-12-08 06:32:15.605077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.569 qpair failed and we were unable to recover it. 00:28:25.569 [2024-12-08 06:32:15.605171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.569 [2024-12-08 06:32:15.605198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.569 qpair failed and we were unable to recover it. 00:28:25.569 [2024-12-08 06:32:15.605326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.569 [2024-12-08 06:32:15.605352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.569 qpair failed and we were unable to recover it. 00:28:25.569 [2024-12-08 06:32:15.605475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.569 [2024-12-08 06:32:15.605501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.569 qpair failed and we were unable to recover it. 00:28:25.569 [2024-12-08 06:32:15.605631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.569 [2024-12-08 06:32:15.605657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.569 qpair failed and we were unable to recover it. 00:28:25.569 [2024-12-08 06:32:15.605751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.569 [2024-12-08 06:32:15.605778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.569 qpair failed and we were unable to recover it. 00:28:25.569 [2024-12-08 06:32:15.605880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.569 [2024-12-08 06:32:15.605909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.569 qpair failed and we were unable to recover it. 00:28:25.569 [2024-12-08 06:32:15.606042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.569 [2024-12-08 06:32:15.606068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.569 qpair failed and we were unable to recover it. 00:28:25.569 [2024-12-08 06:32:15.606198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.569 [2024-12-08 06:32:15.606224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.569 qpair failed and we were unable to recover it. 00:28:25.569 [2024-12-08 06:32:15.606344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.569 [2024-12-08 06:32:15.606370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.569 qpair failed and we were unable to recover it. 
00:28:25.569 [2024-12-08 06:32:15.606501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.606527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.606649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.606675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.606804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.606835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.606940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.606967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.607090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.607115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.607234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.607259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.607382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.607408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.607546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.607571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.607709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.607741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.607837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.607863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.607967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.607993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.608159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.608198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.608355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.608380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.608540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.608579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.608738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.608765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.608857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.608884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.609013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.609054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.609224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.609249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.609414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.609438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.609527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.609551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.609696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.609742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.609860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.609886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.610029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.610055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.610175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.610200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.610338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.610363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.610499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.610525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.610747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.610803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.610911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.610938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.611064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.611092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.611237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.611263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.611415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.611441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.611544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.611571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.611696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.611732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.611834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.611861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.611957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.611983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.612145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.612171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.612315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.612340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.612477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.612502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.612630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.612671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.612763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.612790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.612890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.612916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.613044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.613069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.613211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.613251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.569 qpair failed and we were unable to recover it.
00:28:25.569 [2024-12-08 06:32:15.613392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.569 [2024-12-08 06:32:15.613418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.613543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.613569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.613713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.613760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.613867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.613894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.613989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.614015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.614108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.614134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.614268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.614293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.614409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.614435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.614559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.614584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.614690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.614715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.615422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.615451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.615592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.615618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.616329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.616358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.616522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.616553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.616691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.616742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.616842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.616868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.617035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.617061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.617234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.617259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.617363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.617388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.617565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.617591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.617691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.617716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.617835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.617860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.617957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.617983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.618139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.618178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.618304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.618345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.618452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.618477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.618609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.618634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.618783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.618811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.618911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.618937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.619094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.619134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.619261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.619286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.619431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.619456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.619606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.619632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.619748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.619774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.619899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.619924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.619970] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:28:25.570 [2024-12-08 06:32:15.620019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.620046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
[2024-12-08 06:32:15.620049] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.620183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.620208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.620319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.620343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.620486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.620510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.620620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.620644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.620799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.620839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.620973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.621017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.621169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.621195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.621318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.621343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.621478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.621504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.621685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.570 [2024-12-08 06:32:15.621710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.570 qpair failed and we were unable to recover it.
00:28:25.570 [2024-12-08 06:32:15.621836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.621863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.621972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.621999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.622159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.622197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.622334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.622374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.622504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.622528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.622691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.622716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.622820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.622850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.622945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.622971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.623098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.623123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.623247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.623272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.623415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.623441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.623576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.623603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.623709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.623756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.623867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.623893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.624056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.624094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.624226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.624264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.624372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.624396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.624497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.624522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.624774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.624802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.624900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.624925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.625029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.625055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.625168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.625193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.625319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.625345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.625447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.625473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.625615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.625640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.625772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.625800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.625925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.625951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.626127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.626152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.626291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.626316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.626422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.626447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.626579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.626605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.626798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.626826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.626925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.626952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.627075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.627117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.627214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.627239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.627415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.627455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.627582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.627607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.627730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.627756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.627870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.627896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.628033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.628057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.628177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.628202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.628338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.628363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.628501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.628526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.628663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.628689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.628814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.628840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.628930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.628956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.629087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.629116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.629223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.629247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.629391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.629416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.629591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.629616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.629760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.629786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.571 qpair failed and we were unable to recover it.
00:28:25.571 [2024-12-08 06:32:15.629888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.571 [2024-12-08 06:32:15.629915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.630048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.630074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.630184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.630224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.630362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.630387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.630538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.630563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.630738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.630764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.630851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.630877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.630968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.630993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.631117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.631143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.631248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.631274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.631385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.631412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.631538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.631563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.631679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.631706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.631851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.631877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.631972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.632011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.632110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.632135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.632271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.632296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.632419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.632445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.632583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.632607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.632751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.632777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.632909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.632935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.633027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.633052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.633183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.633208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.633341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.633381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.633513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.633538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.633654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.633679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.633801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.633827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.633922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.633948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.634103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.572 [2024-12-08 06:32:15.634142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.572 qpair failed and we were unable to recover it.
00:28:25.572 [2024-12-08 06:32:15.634312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.634336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.634443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.634468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.634547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.634571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.634684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.634710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.634817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.634843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.634966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.634992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.635131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.635174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.635309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.635333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.635454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.635479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.635590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.635617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 
00:28:25.572 [2024-12-08 06:32:15.635776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.635803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.635902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.635928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.636068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.636092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.636210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.636234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.636379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.636405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.636551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.636576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.636704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.636734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.636825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.636850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.636971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.636997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.637135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.637161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 
00:28:25.572 [2024-12-08 06:32:15.637311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.637350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.637467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.637491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.637622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.637647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.637774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.637799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.637909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.572 [2024-12-08 06:32:15.637935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.572 qpair failed and we were unable to recover it. 00:28:25.572 [2024-12-08 06:32:15.638046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.638071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.638247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.638272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.638398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.638437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.638602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.638641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.638772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.638798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 
00:28:25.573 [2024-12-08 06:32:15.638942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.638968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.639109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.639149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.639306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.639331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.639442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.639466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.639589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.639614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.639747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.639774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.639872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.639898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.640034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.640058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.640201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.640240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.640364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.640390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 
00:28:25.573 [2024-12-08 06:32:15.640516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.640541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.640682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.640728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.640836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.640861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.640959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.640984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.641117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.641142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.641248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.641273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.641389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.641418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.641562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.641587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.641710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.641769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.641889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.641915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 
00:28:25.573 [2024-12-08 06:32:15.642061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.642085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.642239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.642278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.642391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.642431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.642539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.642563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.642747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.642775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.642874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.642900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.643004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.643028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.643196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.643236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.643343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.643383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.643512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.643537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 
00:28:25.573 [2024-12-08 06:32:15.643687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.643714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.643815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.643839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.643952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.643978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.644095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.644119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.644227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.644254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.644400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.644424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.644531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.644556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.644670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.644696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.645481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.645511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.645687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.645712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 
00:28:25.573 [2024-12-08 06:32:15.645858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.645885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.573 [2024-12-08 06:32:15.646009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.573 [2024-12-08 06:32:15.646036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.573 qpair failed and we were unable to recover it. 00:28:25.574 [2024-12-08 06:32:15.646197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.574 [2024-12-08 06:32:15.646222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.574 qpair failed and we were unable to recover it. 00:28:25.574 [2024-12-08 06:32:15.646361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.574 [2024-12-08 06:32:15.646387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.574 qpair failed and we were unable to recover it. 00:28:25.574 [2024-12-08 06:32:15.646527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.574 [2024-12-08 06:32:15.646553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.574 qpair failed and we were unable to recover it. 00:28:25.574 [2024-12-08 06:32:15.646691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.574 [2024-12-08 06:32:15.646737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.574 qpair failed and we were unable to recover it. 00:28:25.574 [2024-12-08 06:32:15.647780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.574 [2024-12-08 06:32:15.647811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.574 qpair failed and we were unable to recover it. 00:28:25.574 [2024-12-08 06:32:15.647938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.574 [2024-12-08 06:32:15.647962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.574 qpair failed and we were unable to recover it. 00:28:25.574 [2024-12-08 06:32:15.648089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.574 [2024-12-08 06:32:15.648114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.574 qpair failed and we were unable to recover it. 00:28:25.574 [2024-12-08 06:32:15.648254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.648279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 
00:28:25.856 [2024-12-08 06:32:15.648416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.648443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.648555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.648580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.648736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.648763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.648883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.648908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.649061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.649086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.649876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.649907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.650053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.650084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.650226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.650252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.650391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.650417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.650527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.650553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 
00:28:25.856 [2024-12-08 06:32:15.650698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.650745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.650869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.650896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.651000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.651026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.651159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.651185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.651290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.651316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.651448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.651473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.651613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.651638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.651780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.651807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.651931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.651957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.652062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.652088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 
00:28:25.856 [2024-12-08 06:32:15.652222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.652248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.652409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.652450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.652556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.652582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.652747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.652773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.652886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.652911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.653051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.653078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.653238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.653279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.653442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.653467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.653598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.653623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.653744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.653770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 
00:28:25.856 [2024-12-08 06:32:15.653873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.653900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.856 qpair failed and we were unable to recover it. 00:28:25.856 [2024-12-08 06:32:15.654034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.856 [2024-12-08 06:32:15.654059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.654220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.654259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.654364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.654390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.654524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.654549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.654695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.654735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.654862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.654888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.655028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.655053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.655176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.655201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.655344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.655369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 
00:28:25.857 [2024-12-08 06:32:15.655481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.655506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.655676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.655702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.655830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.655856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.655997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.656023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.656170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.656196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.656360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.656399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.656541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.656570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.656756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.656784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.656879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.656905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.657060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.657087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 
00:28:25.857 [2024-12-08 06:32:15.657250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.657275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.657390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.657415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.657550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.657576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.657713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.657759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.657860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.657887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.658011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.658038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.658174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.658200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.658341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.658367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.658491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.658517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.658665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.658690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 
00:28:25.857 [2024-12-08 06:32:15.658820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.658846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.658973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.658998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.659109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.659149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.659271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.659312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.659457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.659484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.659630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.659671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.659778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.659806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.659921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.659947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.857 [2024-12-08 06:32:15.660066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.857 [2024-12-08 06:32:15.660092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.857 qpair failed and we were unable to recover it. 00:28:25.858 [2024-12-08 06:32:15.660192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.858 [2024-12-08 06:32:15.660233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.858 qpair failed and we were unable to recover it. 
00:28:25.858 [2024-12-08 06:32:15.660406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.858 [2024-12-08 06:32:15.660432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.858 qpair failed and we were unable to recover it.
[the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error on tqpair=0xca45d0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats back-to-back from 06:32:15.660406 through 06:32:15.678729; duplicate entries elided]
00:28:25.861 [2024-12-08 06:32:15.679798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.861 [2024-12-08 06:32:15.679840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.861 qpair failed and we were unable to recover it.
[from 06:32:15.678827 onward the identical failure keeps repeating, now alternating between tqpair=0xca45d0 and tqpair=0x7f7540000b90, always against addr=10.0.0.2, port=4420, through 06:32:15.695235; duplicate entries elided]
00:28:25.863 [2024-12-08 06:32:15.695359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.863 [2024-12-08 06:32:15.695398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.863 qpair failed and we were unable to recover it. 00:28:25.863 [2024-12-08 06:32:15.695514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.863 [2024-12-08 06:32:15.695539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.863 qpair failed and we were unable to recover it. 00:28:25.863 [2024-12-08 06:32:15.695684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.863 [2024-12-08 06:32:15.695732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.863 qpair failed and we were unable to recover it. 00:28:25.863 [2024-12-08 06:32:15.695861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.863 [2024-12-08 06:32:15.695887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.863 qpair failed and we were unable to recover it. 00:28:25.863 [2024-12-08 06:32:15.696046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.863 [2024-12-08 06:32:15.696071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.863 qpair failed and we were unable to recover it. 00:28:25.863 [2024-12-08 06:32:15.696186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.863 [2024-12-08 06:32:15.696212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.863 qpair failed and we were unable to recover it. 00:28:25.863 [2024-12-08 06:32:15.696374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.863 [2024-12-08 06:32:15.696400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.863 qpair failed and we were unable to recover it. 00:28:25.863 [2024-12-08 06:32:15.696506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.863 [2024-12-08 06:32:15.696531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.863 qpair failed and we were unable to recover it. 00:28:25.863 [2024-12-08 06:32:15.696671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.863 [2024-12-08 06:32:15.696696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.863 qpair failed and we were unable to recover it. 00:28:25.863 [2024-12-08 06:32:15.696875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.863 [2024-12-08 06:32:15.696902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.863 qpair failed and we were unable to recover it. 
00:28:25.864 [2024-12-08 06:32:15.697055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.697099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.697278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.697302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.697426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.697451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.697578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.697603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.697765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.697793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.697891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.697916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.698068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.698094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.698233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.698273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.698375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.698400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.698544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.698570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 
00:28:25.864 [2024-12-08 06:32:15.698736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.698762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.698858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.698883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.699033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.699058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.699234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.699259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.699423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.699462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.699611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.699636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.699787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.699814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.699933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.699958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.700116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.700155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.700256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.700281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 
00:28:25.864 [2024-12-08 06:32:15.700410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.700434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.700573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.700599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.700785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.700841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.700974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.701001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.701117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.701142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.701309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.701350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.701482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.701509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.701640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.701672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.701823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.701849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.701974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.702015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 
00:28:25.864 [2024-12-08 06:32:15.702142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.702184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.702307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.702334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.702453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.702478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.702588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.702614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.702754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.702780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.702926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.702953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.703113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.703138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.703293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.864 [2024-12-08 06:32:15.703317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.864 qpair failed and we were unable to recover it. 00:28:25.864 [2024-12-08 06:32:15.703453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.703478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.703638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.703679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 
00:28:25.865 [2024-12-08 06:32:15.703823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.703850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.703961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.703987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.704138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.704180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.704298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.704338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.704441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.704466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.704575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.704600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.704784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.704811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.704917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.704943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.705089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.705114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.705256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.705296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 
00:28:25.865 [2024-12-08 06:32:15.705427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.705453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.705566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.705591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.705763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.705819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.705975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.706002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.706164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.706195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.706319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.706359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.706512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.706537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.706700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.706747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.706894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.706920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.707036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.707075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 
00:28:25.865 [2024-12-08 06:32:15.707258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.707282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.707436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.707460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.707589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.707615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.707760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.707786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.707876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.707903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.708060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.708085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.708198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.708237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.708369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.708395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.708548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.708574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 00:28:25.865 [2024-12-08 06:32:15.708688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.865 [2024-12-08 06:32:15.708713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.865 qpair failed and we were unable to recover it. 
00:28:25.865 [2024-12-08 06:32:15.708733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:25.868 [2024-12-08 06:32:15.723663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.868 [2024-12-08 06:32:15.723689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.868 qpair failed and we were unable to recover it.
00:28:25.868 [2024-12-08 06:32:15.723780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.723807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.723934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.723960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.724070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.724096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.724257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.724283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.724418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.724444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.724540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.724566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.724677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.724777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.724957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.724985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.725135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.725162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.725311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.725350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 
00:28:25.868 [2024-12-08 06:32:15.725491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.725517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.725664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.725691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.725825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.725853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.726018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.726044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.726210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.726235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.726325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.726355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.726502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.726528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.726664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.726690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.726782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.726809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.726937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.726963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 
00:28:25.868 [2024-12-08 06:32:15.727056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.727082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.727225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.727250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.727399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.727425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.727529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.727556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.868 qpair failed and we were unable to recover it. 00:28:25.868 [2024-12-08 06:32:15.727730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.868 [2024-12-08 06:32:15.727772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.727874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.727903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.727999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.728026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.728189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.728214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.728358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.728384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.728523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.728549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 
00:28:25.869 [2024-12-08 06:32:15.728653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.728678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.728831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.728858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.728981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.729025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.729149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.729189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.729323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.729349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.729492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.729518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.729657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.729682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.729843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.729870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.730037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.730062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.730213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.730237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 
00:28:25.869 [2024-12-08 06:32:15.730344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.730369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.730504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.730529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.730693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.730745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.730883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.730908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.731099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.731124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.731225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.731266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.731426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.731451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.731590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.731616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.731738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.731764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.731859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.731885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 
00:28:25.869 [2024-12-08 06:32:15.731982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.732008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.732117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.732143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.732327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.732352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.732521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.732545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.732708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.732757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.732873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.732898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.733049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.733075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.733198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.733223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.733371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.733397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 00:28:25.869 [2024-12-08 06:32:15.733531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.733557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.869 qpair failed and we were unable to recover it. 
00:28:25.869 [2024-12-08 06:32:15.733711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.869 [2024-12-08 06:32:15.733742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.733831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.733857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.734028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.734053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.734230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.734255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.734416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.734441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.734561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.734587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.734704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.734748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.734897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.734922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.735044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.735084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.735233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.735258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 
00:28:25.870 [2024-12-08 06:32:15.735395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.735420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.735566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.735591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.735762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.735789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.735917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.735943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.736071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.736097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.736241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.736266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.736404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.736429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.736560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.736586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.736708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.736777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.736892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.736920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 
00:28:25.870 [2024-12-08 06:32:15.737058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.737085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.737222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.737248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.737384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.737411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.737530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.737557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.737753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.737781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.737910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.737936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.738047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.738072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.738187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.738212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.738376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.738402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.738541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.738581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 
00:28:25.870 [2024-12-08 06:32:15.738692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.738717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.738807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.738833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.738967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.739009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.739129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.739155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.739319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.739359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.870 [2024-12-08 06:32:15.739460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.870 [2024-12-08 06:32:15.739485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.870 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.739622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.739656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.739819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.739847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.739962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.739989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.740127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.740167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 
00:28:25.871 [2024-12-08 06:32:15.740300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.740326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.740461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.740487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.740571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.740606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.740770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.740811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.740931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.740957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.741120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.741146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.741262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.741288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.741515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.741555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.741696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.741730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.741862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.741887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 
00:28:25.871 [2024-12-08 06:32:15.742036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.742062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.742189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.742216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.742374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.742399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.742539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.742565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.742745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.742786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.742911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.742938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.743065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.743091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.743258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.743283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.743501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.743526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.743666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.743692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 
00:28:25.871 [2024-12-08 06:32:15.743828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.743856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.743988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.744028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.744155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.744194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.744401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.744442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.744611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.744637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.744767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.744795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.744916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.744943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.745067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.745107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.745232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.745272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 00:28:25.871 [2024-12-08 06:32:15.745446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.871 [2024-12-08 06:32:15.745486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.871 qpair failed and we were unable to recover it. 
00:28:25.871 [2024-12-08 06:32:15.745612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.871 [2024-12-08 06:32:15.745638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.871 qpair failed and we were unable to recover it.
00:28:25.871 [2024-12-08 06:32:15.745768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.871 [2024-12-08 06:32:15.745796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.871 qpair failed and we were unable to recover it.
00:28:25.871 [2024-12-08 06:32:15.745937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.871 [2024-12-08 06:32:15.745964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.871 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.746071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.746111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.746201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.746227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.746358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.746383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.746551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.746577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.746676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.746702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.746868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.746909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.747078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.747104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.747267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.747293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.747406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.747431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.747584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.747610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.747718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.747753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.747867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.747893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.748021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.748048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.748177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.748217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.748424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.748450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.748586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.748612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.748746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.748773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.748858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.748886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.749050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.749089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.749204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.749232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.749355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.749383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.749508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.749533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.749638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.749664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.749789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.749818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.749931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.749958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.750052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.750078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.750202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.750228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.750371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.750397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.750531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.750558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.750639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.750666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.750792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.750836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.750978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.751006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.751124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.751150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.751277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.751302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.751440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.751465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.751619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.751645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.751754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.751781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.872 qpair failed and we were unable to recover it.
00:28:25.872 [2024-12-08 06:32:15.751914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.872 [2024-12-08 06:32:15.751941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.752035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.752061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.752186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.752212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.752301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.752326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.752456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.752483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.752610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.752637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.752748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.752775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.752877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.752904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.753055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.753096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.753211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.753236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.753376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.753403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.753548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.753575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.753739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.753765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.753881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.753907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.754057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.754097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.754228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.754253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.754409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.754451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.754547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.754574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.754735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.754763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.754884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.754910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.755038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.755081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.755215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.755241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.755377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.755404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.755515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.755540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.755668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.755694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.755823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.755851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.756017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.756044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.756188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.756213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.756351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.756377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.756525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.756551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.756680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.756706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.756837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.756863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.756984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.757011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.757110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.757135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.757272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.757298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.757455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.757481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.757612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.757638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.757770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.873 [2024-12-08 06:32:15.757810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.873 qpair failed and we were unable to recover it.
00:28:25.873 [2024-12-08 06:32:15.757905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.757932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.758029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.758055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.758189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.758215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.758323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.758348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.758508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.758534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.758707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.758755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.758849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.758877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.759019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.759046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.759211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.759251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.759352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.759383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.759519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.759547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.759694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.759727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.759837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.759863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.760016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.760042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.760176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.760217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.760307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.760333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.760461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.760488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.760593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.760633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.760797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.760837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.760962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.760990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.761086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.761112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.761273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.761300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.761388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.761415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.761516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.761544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.761705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.761755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.761909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.761937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.762055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.762082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.762196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.762222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.762368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.762394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.762528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.762556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.762690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.762737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.762872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.762901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.763049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.874 [2024-12-08 06:32:15.763091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.874 qpair failed and we were unable to recover it.
00:28:25.874 [2024-12-08 06:32:15.763252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.763279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.763400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.763428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.763521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.763548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.763663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.763695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.763820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.763847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.763989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.764014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.764156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.764182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.764269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.764295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.764416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.764442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.764551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.764577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.764693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.764719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.764845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.764873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.764994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.765020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.765102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.765129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.765214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.765241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.765352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.765378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.765467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.765494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.765645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.765671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.765821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.765848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.765933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.765959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.766048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.766074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.766163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.766189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.766311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.766337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.766457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.766485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.766604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.766631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.766755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.766782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.766930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.766956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.767077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.767104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.767220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.767247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.767333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.767359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.767510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.767536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.767622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.767649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.767745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.767773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.767859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.767886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.767980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.768005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.768119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.768145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.768292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.768318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.875 qpair failed and we were unable to recover it.
00:28:25.875 [2024-12-08 06:32:15.768458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.875 [2024-12-08 06:32:15.768484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.768571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.768597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.768758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.768799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.768926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.768954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.769076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.769102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.769217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.769244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.769365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.769391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.769481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.769507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.769624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.769651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.769806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.769832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.769930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.769956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.770073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.770099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.770317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.770354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.770479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.770505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.770683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.770710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.770861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.770889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.770989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.771016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.771143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.771169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.771283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.771309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.771473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.771500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.771616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.771643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.771738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.771765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.771891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.771917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.772126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.772151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.772299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.772333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.772420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.772445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.772619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.772645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.772735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.772761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.772884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.772910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.773005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.773032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.773256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.773282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.773401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.773428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.773556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.773583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.773674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.773700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.773834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.773861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.774066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.774091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.774235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.774262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.774405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.876 [2024-12-08 06:32:15.774431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.876 qpair failed and we were unable to recover it.
00:28:25.876 [2024-12-08 06:32:15.774591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.774618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.774771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.774798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.774895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.774933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.775031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.775057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.775186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.775212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.775410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.775436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.775619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.775645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.775837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.775864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.776025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.776050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.776266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.776292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.776437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.776463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.776629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.776655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.776860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.776887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.777029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.777055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.777193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.777219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.777344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.777370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.777491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.777517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.777675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.777716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.777900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.777940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.778139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.778193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.778331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.778358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.778458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.778483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.778509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:25.877 [2024-12-08 06:32:15.778541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:25.877 [2024-12-08 06:32:15.778560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:25.877 [2024-12-08 06:32:15.778574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:25.877 [2024-12-08 06:32:15.778584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
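Editor's note: the app_setup_trace NOTICE records are worth pulling out of the error noise: the nvmf target was started with tracepoint group mask 0xFFFF, and the notices themselves describe how to capture the resulting trace. A sketch of that workflow using only the commands the notices name (the /tmp destination is an arbitrary placeholder, not from the log):

# snapshot the running nvmf target's tracepoints, exactly as the notice suggests
spdk_trace -s nvmf -i 0
# or preserve the shared-memory trace file for offline analysis
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0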
00:28:25.877 [2024-12-08 06:32:15.778629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.877 [2024-12-08 06:32:15.778654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.877 qpair failed and we were unable to recover it. 00:28:25.877 [2024-12-08 06:32:15.778790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.877 [2024-12-08 06:32:15.778817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.877 qpair failed and we were unable to recover it. 00:28:25.877 [2024-12-08 06:32:15.778924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.877 [2024-12-08 06:32:15.778950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.877 qpair failed and we were unable to recover it. 00:28:25.877 [2024-12-08 06:32:15.779066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.877 [2024-12-08 06:32:15.779092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.877 qpair failed and we were unable to recover it. 00:28:25.877 [2024-12-08 06:32:15.779272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.877 [2024-12-08 06:32:15.779298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.877 qpair failed and we were unable to recover it. 00:28:25.877 [2024-12-08 06:32:15.779394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.877 [2024-12-08 06:32:15.779430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.877 qpair failed and we were unable to recover it. 00:28:25.877 [2024-12-08 06:32:15.779546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.877 [2024-12-08 06:32:15.779572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.877 qpair failed and we were unable to recover it. 00:28:25.877 [2024-12-08 06:32:15.779732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.877 [2024-12-08 06:32:15.779760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.877 qpair failed and we were unable to recover it. 00:28:25.877 [2024-12-08 06:32:15.779885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.877 [2024-12-08 06:32:15.779911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.877 qpair failed and we were unable to recover it. 00:28:25.877 [2024-12-08 06:32:15.780024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.877 [2024-12-08 06:32:15.780054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.877 qpair failed and we were unable to recover it. 
00:28:25.877 [2024-12-08 06:32:15.780215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.780242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.780411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.780438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.877 [2024-12-08 06:32:15.780437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:28:25.877 [2024-12-08 06:32:15.780503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:28:25.877 [2024-12-08 06:32:15.780551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:28:25.877 [2024-12-08 06:32:15.780554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:28:25.877 [2024-12-08 06:32:15.780658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.877 [2024-12-08 06:32:15.780683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.877 qpair failed and we were unable to recover it.
00:28:25.878 [2024-12-08 06:32:15.780810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.878 [2024-12-08 06:32:15.780835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.878 qpair failed and we were unable to recover it.
00:28:25.878 [2024-12-08 06:32:15.780944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.878 [2024-12-08 06:32:15.780970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.878 qpair failed and we were unable to recover it.
00:28:25.878 [2024-12-08 06:32:15.781090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.878 [2024-12-08 06:32:15.781117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.878 qpair failed and we were unable to recover it.
00:28:25.878 [2024-12-08 06:32:15.781241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.878 [2024-12-08 06:32:15.781267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.878 qpair failed and we were unable to recover it.
00:28:25.878 [2024-12-08 06:32:15.781424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.878 [2024-12-08 06:32:15.781451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.878 qpair failed and we were unable to recover it.
00:28:25.878 [2024-12-08 06:32:15.781578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.878 [2024-12-08 06:32:15.781605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.878 qpair failed and we were unable to recover it.
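Editor's note: the four reactor_run NOTICEs show SPDK's event framework starting one reactor (its per-core poller thread) on each core of the application's CPU mask; cores 4 through 7 correspond to a mask of 0xf0. A hedged sketch of a launch line that would produce these notices ('-m' is SPDK's usual core-mask option; the binary path is a placeholder, not taken from this log):

# hypothetical launch: '-m 0xf0' selects cores 4-7, matching the
# "Reactor started on core 4..7" notices above
./build/bin/nvmf_tgt -m 0xf0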
00:28:25.878 [2024-12-08 06:32:15.781701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.781734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.781855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.781881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.781966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.781992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.782070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.782096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.782218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.782244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.782450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.782481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.782579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.782605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.782741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.782781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.782894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.782934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.783118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.783159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 
00:28:25.878 [2024-12-08 06:32:15.783264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.783291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.783411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.783437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.783526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.783552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.783652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.783678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.783807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.783847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.783971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.784000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.784119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.784147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.784319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.784346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.784502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.784528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.784705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.784738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 
00:28:25.878 [2024-12-08 06:32:15.784913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.784939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.785084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.785110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.785230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.785257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.785385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.785412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.785536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.785562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.785780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.878 [2024-12-08 06:32:15.785807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.878 qpair failed and we were unable to recover it. 00:28:25.878 [2024-12-08 06:32:15.785928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.785954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.786085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.786111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.786259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.786285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.786414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.786442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 
00:28:25.879 [2024-12-08 06:32:15.786538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.786565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.786710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.786746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.786867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.786894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.786987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.787013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.787130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.787156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.787263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.787290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.787433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.787459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.787600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.787626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.787744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.787771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.787860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.787886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 
00:28:25.879 [2024-12-08 06:32:15.787983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.788010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.788104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.788131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.788230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.788256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.788374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.788400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.788481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.788507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.788645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.788691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.788841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.788883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.788981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.789010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.789160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.789187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.789282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.789309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 
00:28:25.879 [2024-12-08 06:32:15.789402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.789428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.789599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.789639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.789754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.789794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.789912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.789950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.790057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.790086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.790229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.790255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.790403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.790430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.790522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.790550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.790671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.790698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.790883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.790924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 
00:28:25.879 [2024-12-08 06:32:15.791020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.791048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.791258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.879 [2024-12-08 06:32:15.791286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.879 qpair failed and we were unable to recover it. 00:28:25.879 [2024-12-08 06:32:15.791433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.791460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.791607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.791634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.791732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.791759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.791862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.791889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.791979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.792005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.792150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.792176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.792325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.792352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.792471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.792500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 
00:28:25.880 [2024-12-08 06:32:15.792622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.792649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.792786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.792826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.792964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.792998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.793122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.793148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.793246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.793272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.793402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.793428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.793577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.793604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.793711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.793769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.793874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.793901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.794000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.794026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 
00:28:25.880 [2024-12-08 06:32:15.794172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.794198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.794315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.794341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.794438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.794464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.794550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.794577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.794683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.794733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.794867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.794897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.794995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.795023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.795140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.795167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.795290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.795318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.795417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.795445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 
00:28:25.880 [2024-12-08 06:32:15.795592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.795619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.795710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.795742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.795833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.795859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.795970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.795996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.796090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.796116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.796237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.796262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.796378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.796405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.796498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.796524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.796620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.796648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.796747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.796780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 
00:28:25.880 [2024-12-08 06:32:15.796924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.796952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.797097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.797124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.797250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.797277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.880 [2024-12-08 06:32:15.797401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.880 [2024-12-08 06:32:15.797428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.880 qpair failed and we were unable to recover it. 00:28:25.881 [2024-12-08 06:32:15.797547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.881 [2024-12-08 06:32:15.797574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.881 qpair failed and we were unable to recover it. 00:28:25.881 [2024-12-08 06:32:15.797768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.881 [2024-12-08 06:32:15.797794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.881 qpair failed and we were unable to recover it. 00:28:25.881 [2024-12-08 06:32:15.797886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.881 [2024-12-08 06:32:15.797912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.881 qpair failed and we were unable to recover it. 00:28:25.881 [2024-12-08 06:32:15.798006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.881 [2024-12-08 06:32:15.798032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.881 qpair failed and we were unable to recover it. 00:28:25.881 [2024-12-08 06:32:15.798174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.881 [2024-12-08 06:32:15.798200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.881 qpair failed and we were unable to recover it. 00:28:25.881 [2024-12-08 06:32:15.798407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.881 [2024-12-08 06:32:15.798434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.881 qpair failed and we were unable to recover it. 
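Editor's note: with this many near-identical failures, counting beats reading. The same three-record pattern repeats with the tqpair context cycling between the 0xca45d0 target-side pointer and the 0x7f75xx000b90 values; grep can size the storm and enumerate the contexts from a saved copy of this console output (console.log is a placeholder filename):

# total refused connects in the captured log
grep -c 'connect() failed, errno = 111' console.log
# distinct tqpair contexts cycling through the failure, with counts
grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c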
00:28:25.881 [2024-12-08 06:32:15.798525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.798551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.798670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.798696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.798826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.798868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.799002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.799030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.799156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.799183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.799332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.799359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.799482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.799509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.799602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.799629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.799760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.799788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.799901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.799926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.800015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.800041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.800186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.800212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.800331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.800357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.800477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.800503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.800586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.800614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.800789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.800830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.800932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.800966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.801089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.801116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.801262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.801289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.801377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.801404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.801498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.801525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.801641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.801667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.801754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.801781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.801900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.801926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.802025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.802051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.802170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.802195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.802315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.802341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.802437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.802463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.802608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.802634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.802752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.802781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.802906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.802933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.803050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.803077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.803198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.803225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.803318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.803344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.803459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.803486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.803576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.803604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.803695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.803729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.803847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.803873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.881 [2024-12-08 06:32:15.804018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.881 [2024-12-08 06:32:15.804044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.881 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.804163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.804189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.804282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.804308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.804396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.804424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.804557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.804598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.804738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.804780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.804911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.804938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.805088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.805114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.805228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.805254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.805373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.805399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.805520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.805548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.805714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.805769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.805921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.805950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.806051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.806079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.806179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.806206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.806322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.806349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.806463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.806491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.806578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.806604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.806727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.806768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.806881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.806909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.807004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.807031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.807150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.807176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.807324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.807350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.807445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.807471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.807617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.807644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.807786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.807828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.807936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.807977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.808108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.808137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.808282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.808308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.808428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.808455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.808594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.808634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.808785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.808826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.808932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.808960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.809080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.809107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.809222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.809248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.809366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.809392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.809502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.809543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.809655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.809695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.809844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.809885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.810006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.810035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.810128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.810155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.810279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.810307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.882 [2024-12-08 06:32:15.810454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.882 [2024-12-08 06:32:15.810481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.882 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.810627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.810667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.810787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.810827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.810919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.810951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.811075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.811102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.811224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.811250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.811369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.811395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.811543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.811572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.811709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.811759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.811894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.811934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.812034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.812062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.812181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.812207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.812328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.812355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.812501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.812528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.812660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.812701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.812875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.812916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.813040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.813068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.813222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.813248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.813393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.813419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.813508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.813534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.813694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.813742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.813908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.813949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.814054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.814084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.814174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.814202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.814348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.814376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.814522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.814549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.814667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.814695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.814841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.814882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.814987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.815015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.815136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.815163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.815283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.815316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.815439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.815465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.815585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.815612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.815746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.815787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.815929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.815970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.816094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.816122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.816271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.816297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.816415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.816442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.883 qpair failed and we were unable to recover it.
00:28:25.883 [2024-12-08 06:32:15.816565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.883 [2024-12-08 06:32:15.816592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.816737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.816764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.816880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.816907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.817021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.817048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.817166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.817194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.817313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.817340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.817488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.817515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.817634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.817660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.817781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.817822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.817968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.818008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.818104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.818131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.818255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.818281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.818362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.818389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.818503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.818529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.818648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.818675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.818829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.818869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.818969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.818997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.819114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.819142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.819264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.819292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.819446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.819474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.819590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.819617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.819708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.819743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.819847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.819874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.819996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.820023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.820117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.820144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.820263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.820290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.820436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.820463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.820586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.820613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.820734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.820762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.820907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.820933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.821058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.821085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.821204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.821231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.821323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.821360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.821446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.821473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.821556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.821583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.821705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.821740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.821834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.821861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.821982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.822009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.822099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.822126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.822245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.822272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.822381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.822407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.884 [2024-12-08 06:32:15.822527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.884 [2024-12-08 06:32:15.822553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.884 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.822671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.822698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.822797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.822825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.822939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.822966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.823060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.823086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.823207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.823233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.823348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.823374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.823470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.823497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.823616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.823643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.823765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.823792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.823899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.823926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.824026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.824052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.824202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.824228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.824311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.824338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.824458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.824484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.824635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.824662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.824756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.824784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.824874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.824901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.825024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.825051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.825166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.825193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.825280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.825306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.825427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.825453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.825545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.825572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.825717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.825749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.825869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.825895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.825983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.826010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.826156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.826183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.826301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.826328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.826415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.826441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.826553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.826580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.826672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.826698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.826850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.826899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.827009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.827039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.827183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.827212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.827302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.827330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.827478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.827506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.827597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.827624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.827752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.827780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.827876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.827904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.827991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.828018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.828133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.828160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.828278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.885 [2024-12-08 06:32:15.828305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.885 qpair failed and we were unable to recover it.
00:28:25.885 [2024-12-08 06:32:15.828421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.886 [2024-12-08 06:32:15.828448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.886 qpair failed and we were unable to recover it.
00:28:25.886 [2024-12-08 06:32:15.828575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.886 [2024-12-08 06:32:15.828603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.886 qpair failed and we were unable to recover it.
00:28:25.886 [2024-12-08 06:32:15.828719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.886 [2024-12-08 06:32:15.828754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.886 qpair failed and we were unable to recover it.
00:28:25.886 [2024-12-08 06:32:15.828899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.886 [2024-12-08 06:32:15.828939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420
00:28:25.886 qpair failed and we were unable to recover it.
00:28:25.886 [2024-12-08 06:32:15.829081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.886 [2024-12-08 06:32:15.829121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.886 qpair failed and we were unable to recover it.
00:28:25.886 [2024-12-08 06:32:15.829249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.886 [2024-12-08 06:32:15.829277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.886 qpair failed and we were unable to recover it.
00:28:25.886 [2024-12-08 06:32:15.829435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.886 [2024-12-08 06:32:15.829464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.886 qpair failed and we were unable to recover it.
00:28:25.886 [2024-12-08 06:32:15.829620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.886 [2024-12-08 06:32:15.829647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.886 qpair failed and we were unable to recover it.
00:28:25.886 [2024-12-08 06:32:15.829744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.886 [2024-12-08 06:32:15.829773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:25.886 qpair failed and we were unable to recover it.
00:28:25.886 [2024-12-08 06:32:15.829896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.829923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.830042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.830072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.830227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.830254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.830376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.830403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.830491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.830520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.830616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.830642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.831303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.831334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.831457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.831493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.831611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.831637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.831771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.831798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 
00:28:25.886 [2024-12-08 06:32:15.831948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.831974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.832116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.832143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.832261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.832287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.832429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.832455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.832573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.832600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.833061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.833091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.833238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.833264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.833383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.833410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.833551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.833578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.833693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.833742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 
00:28:25.886 [2024-12-08 06:32:15.833874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.833916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.834064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.834093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.834210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.834238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.834369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.834396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.834516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.834543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.834659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.834686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.834796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.834825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.834952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.834979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.835130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.835157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.835277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.835304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 
00:28:25.886 [2024-12-08 06:32:15.835415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.835442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.835529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.835556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.835698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.835737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.886 [2024-12-08 06:32:15.835859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.886 [2024-12-08 06:32:15.835886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.886 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.836020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.836061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.836189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.836217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.836345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.836372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.836457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.836484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.836576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.836603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.836729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.836757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 
00:28:25.887 [2024-12-08 06:32:15.836900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.836927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.837003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.837030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.837155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.837182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.837304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.837331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.837473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.837499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.837584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.837611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.837732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.837760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.837889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.837921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.838012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.838039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.838127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.838154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 
00:28:25.887 [2024-12-08 06:32:15.838298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.838325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.838416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.838442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.838569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.838596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.838714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.838773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.838919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.838946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.839064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.839090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.839201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.839228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.839352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.839379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.839527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.839553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.839674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.839700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 
00:28:25.887 [2024-12-08 06:32:15.839802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.839829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.839950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.839977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.840073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.840101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.840229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.840256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.840377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.840403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.840551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.840578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.840688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.840714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.840850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.840877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.840961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.840987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.841071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.841098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 
00:28:25.887 [2024-12-08 06:32:15.841213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.841252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.841383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.887 [2024-12-08 06:32:15.841410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.887 qpair failed and we were unable to recover it. 00:28:25.887 [2024-12-08 06:32:15.841551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.841577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.841689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.841716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.841919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.841959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.842153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.842181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.842643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.842676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.842783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.842811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.842984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.843014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.843220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.843249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 
00:28:25.888 [2024-12-08 06:32:15.843380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.843408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.843528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.843555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.843662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.843700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.843828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.843856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.843982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.844008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.844134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.844161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.844304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.844331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.844419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.844445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.844543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.844569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.844666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.844698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 
00:28:25.888 [2024-12-08 06:32:15.844857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.844887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.845312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.845341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.845489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.845516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.845671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.845698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.845856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.845884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.846079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.846106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.846243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.846270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.846436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.846475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.846602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.846629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.846782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.846809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 
00:28:25.888 [2024-12-08 06:32:15.846952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.846978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.847111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.847151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.847268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.847299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.847455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.847483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.847638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.847665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.847853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.847889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.848066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.848106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.848260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.848289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.848411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.848438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.848560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.848587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 
00:28:25.888 [2024-12-08 06:32:15.848705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.848739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.848840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.848867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.849003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.849030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.849122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.849148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.888 [2024-12-08 06:32:15.849275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.888 [2024-12-08 06:32:15.849302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.888 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.849394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.849429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.849582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.849609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.849759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.849787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.849979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.850005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.850182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.850209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 
00:28:25.889 [2024-12-08 06:32:15.850327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.850354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.850531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.850557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.850706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.850740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.850825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.850852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.850961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.850988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.851139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.851165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.851257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.851283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.851433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.851460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.851555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.851581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 00:28:25.889 [2024-12-08 06:32:15.851702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.889 [2024-12-08 06:32:15.851737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.889 qpair failed and we were unable to recover it. 
00:28:25.889 [2024-12-08 06:32:15.851852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.889 [2024-12-08 06:32:15.851879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.889 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and qpair recovery error repeat continuously from 06:32:15.851852 through 06:32:15.884483, alternating across tqpair handles 0x7f7540000b90, 0x7f7548000b90, and 0xca45d0, all against addr=10.0.0.2, port=4420 ...]
00:28:25.894 [2024-12-08 06:32:15.884457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.894 [2024-12-08 06:32:15.884483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.894 qpair failed and we were unable to recover it.
00:28:25.894 [2024-12-08 06:32:15.884603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.884629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.884768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.884796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.885010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.885038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.885127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.885154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.885313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.885342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.885468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.885494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.885630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.885657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.885747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.885774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.885864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.885891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.886006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.886035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 
00:28:25.894 [2024-12-08 06:32:15.886185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.886212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.886332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.886360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.886531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.886557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.886743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.886770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.886863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.886892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.887032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.887073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.887182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.887211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.887370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.887398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.887513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.887541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.887661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.887688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 
00:28:25.894 [2024-12-08 06:32:15.887847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.887875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.894 [2024-12-08 06:32:15.888016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.894 [2024-12-08 06:32:15.888044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.894 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.888187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.888214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.888336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.888363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.888480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.888507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.888624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.888651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.888789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.888830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.888959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.888988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.889140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.889165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.889316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.889342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 
00:28:25.895 [2024-12-08 06:32:15.889460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.889486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.889642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.889668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.889789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.889817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.889911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.889938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.890060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.890087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.890233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.890260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.890376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.890403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.890492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.890519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.890662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.890689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.890829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.890870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 
00:28:25.895 [2024-12-08 06:32:15.890998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.891026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.891147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.891176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.891305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.891331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.891462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.891488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.891583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.891609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.891736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.891765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.891923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.891950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.892096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.892124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.892271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.892299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.892412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.892439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 
00:28:25.895 [2024-12-08 06:32:15.892554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.892581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.892691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.892718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.892828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.892855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.892981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.893007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.893130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.893157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.893284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.893317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.893439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.893466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.893601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.893629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.893793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.893833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 00:28:25.895 [2024-12-08 06:32:15.893975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.895 [2024-12-08 06:32:15.894014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.895 qpair failed and we were unable to recover it. 
00:28:25.896 [2024-12-08 06:32:15.894116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.894144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.894286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.894313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.894470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.894496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.894590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.894617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.894706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.894744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.894867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.894894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.894996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.895026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.895176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.895203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.895344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.895370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.895466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.895493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 
00:28:25.896 [2024-12-08 06:32:15.895637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.895664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.895790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.895818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.895939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.895965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.896056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.896082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.896209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.896235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.896362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.896390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.896530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.896570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.896712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.896759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.896889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.896917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 00:28:25.896 [2024-12-08 06:32:15.897037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.896 [2024-12-08 06:32:15.897064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.896 qpair failed and we were unable to recover it. 
[... further connect() errno = 111 / qpair-failure records from 06:32:15.897166 through 06:32:15.898192 against tqpair handles 0x7f7548000b90, 0x7f753c000b90, 0xca45d0, and 0x7f7540000b90 ...]
00:28:25.896 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:25.896 [2024-12-08 06:32:15.898311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.896 [2024-12-08 06:32:15.898337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.896 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:28:25.896 qpair failed and we were unable to recover it.
00:28:25.896 [2024-12-08 06:32:15.898451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.896 [2024-12-08 06:32:15.898477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.896 qpair failed and we were unable to recover it.
00:28:25.896 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:25.896 [2024-12-08 06:32:15.898625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.896 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:25.896 [2024-12-08 06:32:15.898653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.896 qpair failed and we were unable to recover it.
00:28:25.896 [2024-12-08 06:32:15.898778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.896 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:25.896 [2024-12-08 06:32:15.898806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.896 qpair failed and we were unable to recover it.
[... further connect() errno = 111 / qpair-failure records against tqpair=0xca45d0 continue from 06:32:15.898904 through 06:32:15.899677 ...]
[... the same connect() errno = 111 / qpair-failure record repeats continuously from 06:32:15.899802 through 06:32:15.909461, alternating in runs between tqpair handles 0xca45d0 and 0x7f7540000b90, all against addr=10.0.0.2, port=4420 ...]
00:28:25.898 [2024-12-08 06:32:15.909622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.898 [2024-12-08 06:32:15.909649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.898 qpair failed and we were unable to recover it.
00:28:25.898 [2024-12-08 06:32:15.909737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.898 [2024-12-08 06:32:15.909766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:25.898 qpair failed and we were unable to recover it.
00:28:25.898 [2024-12-08 06:32:15.909886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.909912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.910035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.910063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.910165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.910194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.910313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.910341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.910463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.910490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.910640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.910667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.910765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.910795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.910891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.910918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.911051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.911078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.911198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.911225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 
00:28:25.898 [2024-12-08 06:32:15.911371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.911397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.911522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.911549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.911669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.911696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.911809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.911836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.911929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.911955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.912049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.912076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.912200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.912227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.898 qpair failed and we were unable to recover it. 00:28:25.898 [2024-12-08 06:32:15.912370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.898 [2024-12-08 06:32:15.912396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.912481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.912507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.912633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.912659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 
00:28:25.899 [2024-12-08 06:32:15.912762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.912789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.912884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.912911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.912996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.913023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.913141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.913167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.913285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.913312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.913429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.913455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.913540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.913567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.913675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.913702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.913804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.913832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.913952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.913979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 
00:28:25.899 [2024-12-08 06:32:15.914129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.914156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.914299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.914326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.914441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.914467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.914596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.914623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.914705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.914741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.914846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.914873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.914968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.914995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.915088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.915115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.915229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.915255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.915403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.915429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 
00:28:25.899 [2024-12-08 06:32:15.915545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.915572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.915713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.915749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.915840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.915867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.915976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.916002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.916116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.916142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.916304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.916346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.916476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.916512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.916635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.916662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.916748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.916776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 00:28:25.899 [2024-12-08 06:32:15.916873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.899 [2024-12-08 06:32:15.916900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.899 qpair failed and we were unable to recover it. 
00:28:25.899 [... repeated connect() failures to 10.0.0.2:4420 continue while the harness emits the following xtrace lines ...]
00:28:25.899 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:25.899 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:25.899 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.899 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
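For context: the "-- #" lines above are bash xtrace from the SPDK test harness, and rpc_cmd is effectively its wrapper around scripts/rpc.py. The same 64 MB, 512-byte-block malloc bdev could be created by hand against a running target; a minimal sketch, assuming the default RPC socket at /var/tmp/spdk.sock:

  # Create a 64 MB malloc bdev with 512-byte blocks, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0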
00:28:25.900 [2024-12-08 06:32:15.918088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.900 [2024-12-08 06:32:15.918115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.900 qpair failed and we were unable to recover it.
00:28:25.900 [... the same triplet keeps repeating, alternating between tqpair=0xca45d0 and tqpair=0x7f7540000b90 ...]
00:28:25.903 [2024-12-08 06:32:15.935812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.903 [2024-12-08 06:32:15.935838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:25.903 qpair failed and we were unable to recover it.
00:28:25.903 [2024-12-08 06:32:15.935919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.935945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.936063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.936090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.936176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.936202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.936323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.936353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.936551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.936577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.936710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.936743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.936839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.936865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.936952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.936978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.937179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.937205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.937336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.937363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 
00:28:25.903 [2024-12-08 06:32:15.937539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.937565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.937686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.937713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.937821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.937847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.937934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.937961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.938075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.938102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.938314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.938352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.938519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.938547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.938713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.938750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.938883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.938910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.939040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.939068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 
00:28:25.903 [2024-12-08 06:32:15.939217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.903 [2024-12-08 06:32:15.939243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.903 qpair failed and we were unable to recover it. 00:28:25.903 [2024-12-08 06:32:15.939409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.939436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.939596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.939623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.939710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.939745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.939838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.939869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.939961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.939988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.940118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.940144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.940277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.940303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.940435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.940462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.940615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.940641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 
00:28:25.904 [2024-12-08 06:32:15.940763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.940792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.940953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.940994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.941140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.941170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.941334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.941361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.941516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.941543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.941662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.941692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.941824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.941852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.941949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.941975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.942136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.942162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.942289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.942316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 
00:28:25.904 [2024-12-08 06:32:15.942461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.942487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.942605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.942631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.942718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.942755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.942874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.942901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.942989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.943015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.943133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.943159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.943277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.943304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.943450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.943476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.943596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.943622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.943751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.943792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 
00:28:25.904 [2024-12-08 06:32:15.943886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.943914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.944030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.944057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.944209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.944236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.944386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.944412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.944524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.944550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.944735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.944764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.904 [2024-12-08 06:32:15.944879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.904 [2024-12-08 06:32:15.944905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.904 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.945018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.945044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.945168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.945195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.945371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.945397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 
00:28:25.905 [2024-12-08 06:32:15.945514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.945541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.945653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.945680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.945775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.945801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.945919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.945945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.946068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.946099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.946215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.946241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.946341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.946368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.946449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.946476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.946630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.946656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.946755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.946782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 
00:28:25.905 [2024-12-08 06:32:15.946864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.946890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.946970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.946997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.947142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.947168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.947266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.947293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.947387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.947413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.947527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.947554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.947644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.947670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.947764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.947791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.947876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.947903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.948014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.948041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 
00:28:25.905 [2024-12-08 06:32:15.948151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.948177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.948267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.948293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.948451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.948477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.948601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.948627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.948778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.948805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.948891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.948917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.949013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.949040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.949159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.949186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.949304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.949330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 00:28:25.905 [2024-12-08 06:32:15.949423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.905 [2024-12-08 06:32:15.949449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.905 qpair failed and we were unable to recover it. 
00:28:25.906 [2024-12-08 06:32:15.949583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.906 [2024-12-08 06:32:15.949625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7548000b90 with addr=10.0.0.2, port=4420 00:28:25.906 qpair failed and we were unable to recover it. 00:28:25.906 [2024-12-08 06:32:15.949762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.906 [2024-12-08 06:32:15.949803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.906 qpair failed and we were unable to recover it. 00:28:25.906 [2024-12-08 06:32:15.949907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.906 [2024-12-08 06:32:15.949935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.906 qpair failed and we were unable to recover it. 00:28:25.906 [2024-12-08 06:32:15.950140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.906 [2024-12-08 06:32:15.950167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.906 qpair failed and we were unable to recover it. 00:28:25.906 [2024-12-08 06:32:15.950289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.906 [2024-12-08 06:32:15.950316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.906 qpair failed and we were unable to recover it. 00:28:25.906 [2024-12-08 06:32:15.950459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.906 [2024-12-08 06:32:15.950486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:25.906 qpair failed and we were unable to recover it. 00:28:25.906 [2024-12-08 06:32:15.950617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.906 [2024-12-08 06:32:15.950645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.906 qpair failed and we were unable to recover it. 00:28:25.906 [2024-12-08 06:32:15.950742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.906 [2024-12-08 06:32:15.950769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.906 qpair failed and we were unable to recover it. 00:28:25.906 [2024-12-08 06:32:15.950884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.906 [2024-12-08 06:32:15.950911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.906 qpair failed and we were unable to recover it. 00:28:25.906 [2024-12-08 06:32:15.951003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.906 [2024-12-08 06:32:15.951030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:25.906 qpair failed and we were unable to recover it. 
00:28:26.172 Malloc0
00:28:26.172 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.172 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:26.172 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.172 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:26.172 [... sequence repeats from 06:32:15.958086 through 06:32:15.959110 for tqpair=0x7f7540000b90, then tqpair=0xca45d0 ...]
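The rpc_cmd above is the harness wrapper around SPDK's scripts/rpc.py. Outside the harness, the same transport would be created roughly as in the sketch below (the repo-relative path is an assumption, and -o appears to be rpc.py's switch for disabling the TCP C2H success optimization):

  # Create the TCP transport on the running nvmf target (sketch; script path assumed)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o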
00:28:26.172 [... sequence repeats from 06:32:15.959238 through 06:32:15.960299 for tqpair=0xca45d0, then from 06:32:15.960399 through 06:32:15.960931 for tqpair=0x7f7540000b90 ...]
00:28:26.172 [... first failures also appear for tqpair=0x7f753c000b90 (06:32:15.961048) and tqpair=0x7f7548000b90 (06:32:15.961229) ...]
00:28:26.172 [2024-12-08 06:32:15.961300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:26.172 [... sequence repeats from 06:32:15.961356 through 06:32:15.961995 for tqpair=0x7f7548000b90, then tqpair=0xca45d0 ...]
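The *** TCP Transport Init *** notice is the target-side acknowledgement that nvmf_create_transport took effect; the initiator's connect() calls presumably keep returning ECONNREFUSED because no listener exists yet on 10.0.0.2:4420 at this point in the script. One way to inspect the transport from the target side, as a sketch:

  # Dump the transports registered with the running target (sketch; script path assumed)
  ./scripts/rpc.py nvmf_get_transports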
00:28:26.173 [... sequence repeats continuously from 06:32:15.962142 through 06:32:15.969479, cycling among tqpair=0xca45d0, tqpair=0x7f7540000b90, and tqpair=0x7f753c000b90 ...]
00:28:26.174 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.174 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:26.174 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.174 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:26.174 [... sequence repeats from 06:32:15.969569 through 06:32:15.970717 for tqpair=0x7f753c000b90, then tqpair=0xca45d0 ...]
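The nvmf_create_subsystem call above creates the subsystem the test reconnects to: -a allows any host NQN and -s sets the serial number. A standalone sketch of this step, plus the listener call that presumably follows later in the script (listener parameters are inferred from the addr/port in the errors above, not shown verbatim in this log):

  # Create the subsystem under test; allow any host, fixed serial number
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # Listener sketch - this is what would make 10.0.0.2:4420 stop refusing connections
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420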
00:28:26.174 [... sequence repeats from 06:32:15.970837 through 06:32:15.976912, cycling among tqpair=0x7f7540000b90, tqpair=0xca45d0, and tqpair=0x7f753c000b90 ...]
00:28:26.175 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.175 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:26.175 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.175 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:26.175 [... sequence repeats from 06:32:15.977060 through 06:32:15.978166 for tqpair=0x7f7540000b90, then tqpair=0x7f753c000b90 ...]
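The bare Malloc0 line earlier in this stretch is most likely the bdev name echoed back by a bdev_malloc_create call whose xtrace was swallowed by the error stream; the nvmf_subsystem_add_ns call above then attaches that bdev as a namespace of cnode1. A sketch of the pair (the malloc size and block size are illustrative, not taken from this log):

  # Create a RAM-backed bdev named Malloc0 (64 MiB, 512-byte blocks; values illustrative)
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  # Attach it as namespace 1 of the subsystem under test
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0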
00:28:26.175 [2024-12-08 06:32:15.978291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.175 [2024-12-08 06:32:15.978318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:26.175 qpair failed and we were unable to recover it. 00:28:26.175 [2024-12-08 06:32:15.978442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.175 [2024-12-08 06:32:15.978469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:26.175 qpair failed and we were unable to recover it. 00:28:26.175 [2024-12-08 06:32:15.978586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.175 [2024-12-08 06:32:15.978613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420 00:28:26.175 qpair failed and we were unable to recover it. 00:28:26.175 [2024-12-08 06:32:15.978706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.175 [2024-12-08 06:32:15.978779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:26.175 qpair failed and we were unable to recover it. 00:28:26.175 [2024-12-08 06:32:15.978926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.175 [2024-12-08 06:32:15.978954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420 00:28:26.175 qpair failed and we were unable to recover it. 00:28:26.175 [2024-12-08 06:32:15.979095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.175 [2024-12-08 06:32:15.979135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:26.175 qpair failed and we were unable to recover it. 00:28:26.175 [2024-12-08 06:32:15.979305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.175 [2024-12-08 06:32:15.979342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:26.175 qpair failed and we were unable to recover it. 00:28:26.175 [2024-12-08 06:32:15.979476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.175 [2024-12-08 06:32:15.979503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:26.175 qpair failed and we were unable to recover it. 00:28:26.175 [2024-12-08 06:32:15.979618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.175 [2024-12-08 06:32:15.979644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:26.175 qpair failed and we were unable to recover it. 00:28:26.175 [2024-12-08 06:32:15.979855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.175 [2024-12-08 06:32:15.979883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420 00:28:26.175 qpair failed and we were unable to recover it. 
00:28:26.175 [2024-12-08 06:32:15.979983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-12-08 06:32:15.980009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.175 qpair failed and we were unable to recover it.
00:28:26.175 [2024-12-08 06:32:15.980149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-12-08 06:32:15.980175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.175 qpair failed and we were unable to recover it.
00:28:26.175 [2024-12-08 06:32:15.980307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-12-08 06:32:15.980335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.175 qpair failed and we were unable to recover it.
00:28:26.175 [2024-12-08 06:32:15.980428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-12-08 06:32:15.980455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.175 qpair failed and we were unable to recover it.
00:28:26.175 [2024-12-08 06:32:15.980574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-12-08 06:32:15.980601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.175 qpair failed and we were unable to recover it.
00:28:26.175 [2024-12-08 06:32:15.980727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-12-08 06:32:15.980755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.175 qpair failed and we were unable to recover it.
00:28:26.175 [2024-12-08 06:32:15.980900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-12-08 06:32:15.980927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.175 qpair failed and we were unable to recover it.
00:28:26.175 [2024-12-08 06:32:15.981044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-12-08 06:32:15.981071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.175 qpair failed and we were unable to recover it.
00:28:26.175 [2024-12-08 06:32:15.981191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-12-08 06:32:15.981224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.175 qpair failed and we were unable to recover it.
00:28:26.175 [2024-12-08 06:32:15.981345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-12-08 06:32:15.981371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.175 qpair failed and we were unable to recover it.
00:28:26.175 [2024-12-08 06:32:15.981487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-12-08 06:32:15.981514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.175 qpair failed and we were unable to recover it.
00:28:26.175 [2024-12-08 06:32:15.981607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-12-08 06:32:15.981633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.175 qpair failed and we were unable to recover it.
00:28:26.175 [2024-12-08 06:32:15.981718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.981751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.981876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.981902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.981995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.982021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.982173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.982199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.982321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.982347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.982463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.982491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.982614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.982641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.982749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.982790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.982991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.983019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.983139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.983165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.983258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.983285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca45d0 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.983373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.983401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.983547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.983573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.983685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.983747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.983903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.983932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.984055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.984083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.984177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.984203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.984324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.984351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.984455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.984483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.984590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.984631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.984850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.984880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.984975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.985002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.985144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.985170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.985309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.985336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.985521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.985557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.176 [2024-12-08 06:32:15.985662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.985701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
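
The `errno = 111` in the records above is ECONNREFUSED on Linux: the host-side reconnect path keeps retrying 10.0.0.2:4420 while the target has no listener up (the test re-adds the listener via `rpc_cmd nvmf_subsystem_add_listener` just below). The two `tqpair` pointers (0x7f7540000b90 and 0x7f753c000b90) are simply the two qpairs the host cycles through. A minimal standalone sketch, not SPDK source, of the kind of probe that produces this errno (address and port taken from the log; everything else illustrative):

    /* Illustrative sketch only (not SPDK code): a blocking TCP connect
     * probe reported the same way posix_sock_create reports it. With a
     * reachable host and no listener, errno == ECONNREFUSED (111 on Linux). */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            fprintf(stderr, "connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }
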
00:28:26.176 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:26.176 [2024-12-08 06:32:15.985875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.176 [2024-12-08 06:32:15.985903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:26.176 [2024-12-08 06:32:15.986055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.986082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.986213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.986240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.986390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.986417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.986525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.986552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.986690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.986747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.986890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.986919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.987064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.987091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.987248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.987281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.987399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.987426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.987529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.987555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.987690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.987719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.987819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.987846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.987996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.988022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.988132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.988159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.988279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.988306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.988424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.988451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f753c000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.988620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-12-08 06:32:15.988648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.176 qpair failed and we were unable to recover it.
00:28:26.176 [2024-12-08 06:32:15.988769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-12-08 06:32:15.988796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.177 qpair failed and we were unable to recover it.
00:28:26.177 [2024-12-08 06:32:15.988930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-12-08 06:32:15.988957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.177 qpair failed and we were unable to recover it.
00:28:26.177 [2024-12-08 06:32:15.989086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-12-08 06:32:15.989113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.177 qpair failed and we were unable to recover it.
00:28:26.177 [2024-12-08 06:32:15.989254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-12-08 06:32:15.989281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.177 qpair failed and we were unable to recover it.
00:28:26.177 [2024-12-08 06:32:15.989372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-12-08 06:32:15.989399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7540000b90 with addr=10.0.0.2, port=4420
00:28:26.177 qpair failed and we were unable to recover it.
00:28:26.177 [2024-12-08 06:32:15.989590] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:26.177 [2024-12-08 06:32:15.992176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.177 [2024-12-08 06:32:15.992294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.177 [2024-12-08 06:32:15.992322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.177 [2024-12-08 06:32:15.992338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.177 [2024-12-08 06:32:15.992351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.177 [2024-12-08 06:32:15.992384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.177 qpair failed and we were unable to recover it.
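
At 06:32:15.989590 the target's listener comes back (the nvmf_tcp_listen NOTICE above), so TCP connects now succeed and the failure moves up a layer: the host re-issues the fabric-level CONNECT for an I/O qpair naming controller ID 0x1, the target no longer knows that controller ("Unknown controller ID 0x1" from _nvmf_ctrlr_add_io_qpair), and the CONNECT completes with sct 1, sc 130. sct 1 is the command-specific status code type, and sc 130 (0x82) for a Fabrics CONNECT reads as "Connect Invalid Parameters" in the NVMe-oF specification. A small sketch of that decoding; the constants are written out locally from the spec values rather than taken from SPDK headers, so treat the identifiers as assumptions:

    /* Sketch: decode the "sct 1, sc 130" status seen in the completions above.
     * Numeric values are NVMe-oF spec values; the names are local, not SPDK's. */
    #include <stdio.h>

    #define SCT_COMMAND_SPECIFIC      0x1  /* status code type 1 */
    #define SC_CONNECT_INVALID_PARAMS 0x82 /* 130 decimal, Fabrics CONNECT status */

    static void decode_connect_status(unsigned sct, unsigned sc)
    {
        if (sct == SCT_COMMAND_SPECIFIC && sc == SC_CONNECT_INVALID_PARAMS) {
            /* The target rejected a field of the CONNECT command -- here the
             * stale controller ID 0x1 that no longer exists on the target. */
            printf("CONNECT rejected: invalid parameters (sct %u, sc 0x%02x)\n", sct, sc);
        }
    }

    int main(void)
    {
        decode_connect_status(1, 130); /* values from the log lines above */
        return 0;
    }
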
00:28:26.177 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.177 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:26.177 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.177 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.177 06:32:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.177 06:32:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1180128 00:28:26.177 [2024-12-08 06:32:16.002027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.177 [2024-12-08 06:32:16.002126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.177 [2024-12-08 06:32:16.002166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.177 [2024-12-08 06:32:16.002182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.177 [2024-12-08 06:32:16.002194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.177 [2024-12-08 06:32:16.002234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.177 qpair failed and we were unable to recover it. 00:28:26.177 [2024-12-08 06:32:16.012039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.177 [2024-12-08 06:32:16.012130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.177 [2024-12-08 06:32:16.012154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.177 [2024-12-08 06:32:16.012169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.177 [2024-12-08 06:32:16.012181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.177 [2024-12-08 06:32:16.012210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.177 qpair failed and we were unable to recover it. 
00:28:26.177 [2024-12-08 06:32:16.022036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.177 [2024-12-08 06:32:16.022144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.177 [2024-12-08 06:32:16.022169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.177 [2024-12-08 06:32:16.022184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.177 [2024-12-08 06:32:16.022196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.177 [2024-12-08 06:32:16.022226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.177 qpair failed and we were unable to recover it. 00:28:26.177 [2024-12-08 06:32:16.031942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.177 [2024-12-08 06:32:16.032036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.177 [2024-12-08 06:32:16.032061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.177 [2024-12-08 06:32:16.032076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.177 [2024-12-08 06:32:16.032089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.177 [2024-12-08 06:32:16.032119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.177 qpair failed and we were unable to recover it. 00:28:26.177 [2024-12-08 06:32:16.041983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.177 [2024-12-08 06:32:16.042082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.177 [2024-12-08 06:32:16.042107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.177 [2024-12-08 06:32:16.042120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.177 [2024-12-08 06:32:16.042133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.177 [2024-12-08 06:32:16.042163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.177 qpair failed and we were unable to recover it. 
00:28:26.177 [2024-12-08 06:32:16.051998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.177 [2024-12-08 06:32:16.052098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.177 [2024-12-08 06:32:16.052124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.177 [2024-12-08 06:32:16.052140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.177 [2024-12-08 06:32:16.052152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.177 [2024-12-08 06:32:16.052182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.177 qpair failed and we were unable to recover it. 00:28:26.177 [2024-12-08 06:32:16.062095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.177 [2024-12-08 06:32:16.062192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.177 [2024-12-08 06:32:16.062223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.177 [2024-12-08 06:32:16.062238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.177 [2024-12-08 06:32:16.062251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.177 [2024-12-08 06:32:16.062291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.177 qpair failed and we were unable to recover it. 00:28:26.177 [2024-12-08 06:32:16.072124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.177 [2024-12-08 06:32:16.072219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.177 [2024-12-08 06:32:16.072260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.177 [2024-12-08 06:32:16.072274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.177 [2024-12-08 06:32:16.072287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.177 [2024-12-08 06:32:16.072317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.177 qpair failed and we were unable to recover it. 
00:28:26.177 [2024-12-08 06:32:16.082148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.177 [2024-12-08 06:32:16.082234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.177 [2024-12-08 06:32:16.082261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.177 [2024-12-08 06:32:16.082276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.177 [2024-12-08 06:32:16.082289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.177 [2024-12-08 06:32:16.082318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.177 qpair failed and we were unable to recover it. 00:28:26.177 [2024-12-08 06:32:16.092226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.177 [2024-12-08 06:32:16.092335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.177 [2024-12-08 06:32:16.092371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.177 [2024-12-08 06:32:16.092386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.177 [2024-12-08 06:32:16.092398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.177 [2024-12-08 06:32:16.092429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.177 qpair failed and we were unable to recover it. 00:28:26.177 [2024-12-08 06:32:16.102121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.177 [2024-12-08 06:32:16.102213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.177 [2024-12-08 06:32:16.102236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.177 [2024-12-08 06:32:16.102255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.177 [2024-12-08 06:32:16.102269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.177 [2024-12-08 06:32:16.102299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.177 qpair failed and we were unable to recover it. 
00:28:26.177 [2024-12-08 06:32:16.112194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.177 [2024-12-08 06:32:16.112292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.177 [2024-12-08 06:32:16.112317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.177 [2024-12-08 06:32:16.112330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.177 [2024-12-08 06:32:16.112342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.177 [2024-12-08 06:32:16.112371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.177 qpair failed and we were unable to recover it. 00:28:26.177 [2024-12-08 06:32:16.122202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.177 [2024-12-08 06:32:16.122286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.177 [2024-12-08 06:32:16.122309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.122323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.122335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.122365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 00:28:26.178 [2024-12-08 06:32:16.132177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.178 [2024-12-08 06:32:16.132267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.178 [2024-12-08 06:32:16.132308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.132323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.132336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.132365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 
00:28:26.178 [2024-12-08 06:32:16.142283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.178 [2024-12-08 06:32:16.142372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.178 [2024-12-08 06:32:16.142397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.142411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.142423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.142452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 00:28:26.178 [2024-12-08 06:32:16.152305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.178 [2024-12-08 06:32:16.152395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.178 [2024-12-08 06:32:16.152419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.152434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.152446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.152475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 00:28:26.178 [2024-12-08 06:32:16.162294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.178 [2024-12-08 06:32:16.162377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.178 [2024-12-08 06:32:16.162402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.162418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.162430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.162460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 
00:28:26.178 [2024-12-08 06:32:16.172303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.178 [2024-12-08 06:32:16.172390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.178 [2024-12-08 06:32:16.172414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.172428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.172440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.172470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 00:28:26.178 [2024-12-08 06:32:16.182393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.178 [2024-12-08 06:32:16.182484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.178 [2024-12-08 06:32:16.182507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.182521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.182533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.182562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 00:28:26.178 [2024-12-08 06:32:16.192371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.178 [2024-12-08 06:32:16.192471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.178 [2024-12-08 06:32:16.192496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.192510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.192522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.192553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 
00:28:26.178 [2024-12-08 06:32:16.202392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.178 [2024-12-08 06:32:16.202474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.178 [2024-12-08 06:32:16.202498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.202512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.202524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.202553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 00:28:26.178 [2024-12-08 06:32:16.212447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.178 [2024-12-08 06:32:16.212529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.178 [2024-12-08 06:32:16.212554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.212569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.212581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.212610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 00:28:26.178 [2024-12-08 06:32:16.222562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.178 [2024-12-08 06:32:16.222664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.178 [2024-12-08 06:32:16.222689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.222718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.222740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.222778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 
00:28:26.178 [2024-12-08 06:32:16.232496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.178 [2024-12-08 06:32:16.232578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.178 [2024-12-08 06:32:16.232602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.232622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.232635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.232665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 00:28:26.178 [2024-12-08 06:32:16.242577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.178 [2024-12-08 06:32:16.242667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.178 [2024-12-08 06:32:16.242692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.242729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.242745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.242777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 00:28:26.178 [2024-12-08 06:32:16.252503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.178 [2024-12-08 06:32:16.252591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.178 [2024-12-08 06:32:16.252615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.178 [2024-12-08 06:32:16.252629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.178 [2024-12-08 06:32:16.252641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.178 [2024-12-08 06:32:16.252671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.178 qpair failed and we were unable to recover it. 
00:28:26.179 [2024-12-08 06:32:16.262557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.179 [2024-12-08 06:32:16.262650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.179 [2024-12-08 06:32:16.262674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.179 [2024-12-08 06:32:16.262688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.179 [2024-12-08 06:32:16.262715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.179 [2024-12-08 06:32:16.262755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.179 qpair failed and we were unable to recover it. 00:28:26.179 [2024-12-08 06:32:16.272615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.179 [2024-12-08 06:32:16.272704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.179 [2024-12-08 06:32:16.272755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.179 [2024-12-08 06:32:16.272773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.179 [2024-12-08 06:32:16.272785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.179 [2024-12-08 06:32:16.272824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.179 qpair failed and we were unable to recover it. 00:28:26.179 [2024-12-08 06:32:16.282644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.179 [2024-12-08 06:32:16.282740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.179 [2024-12-08 06:32:16.282777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.179 [2024-12-08 06:32:16.282791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.179 [2024-12-08 06:32:16.282804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.179 [2024-12-08 06:32:16.282835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.179 qpair failed and we were unable to recover it. 
00:28:26.440 [2024-12-08 06:32:16.292646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.440 [2024-12-08 06:32:16.292745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.440 [2024-12-08 06:32:16.292771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.440 [2024-12-08 06:32:16.292786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.440 [2024-12-08 06:32:16.292799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.440 [2024-12-08 06:32:16.292830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.440 qpair failed and we were unable to recover it. 00:28:26.440 [2024-12-08 06:32:16.302741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.440 [2024-12-08 06:32:16.302856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.440 [2024-12-08 06:32:16.302881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.440 [2024-12-08 06:32:16.302896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.440 [2024-12-08 06:32:16.302909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.440 [2024-12-08 06:32:16.302940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.440 qpair failed and we were unable to recover it. 00:28:26.440 [2024-12-08 06:32:16.312754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.440 [2024-12-08 06:32:16.312848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.440 [2024-12-08 06:32:16.312874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.440 [2024-12-08 06:32:16.312888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.440 [2024-12-08 06:32:16.312901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.440 [2024-12-08 06:32:16.312931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.440 qpair failed and we were unable to recover it. 
00:28:26.440 [2024-12-08 06:32:16.322766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.440 [2024-12-08 06:32:16.322906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.440 [2024-12-08 06:32:16.322943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.440 [2024-12-08 06:32:16.322957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.440 [2024-12-08 06:32:16.322970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.440 [2024-12-08 06:32:16.323001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.440 qpair failed and we were unable to recover it. 00:28:26.440 [2024-12-08 06:32:16.332828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.440 [2024-12-08 06:32:16.332923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.440 [2024-12-08 06:32:16.332947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.440 [2024-12-08 06:32:16.332962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.440 [2024-12-08 06:32:16.332974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.440 [2024-12-08 06:32:16.333005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.440 qpair failed and we were unable to recover it. 00:28:26.440 [2024-12-08 06:32:16.342832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.440 [2024-12-08 06:32:16.342930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.440 [2024-12-08 06:32:16.342955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.440 [2024-12-08 06:32:16.342970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.440 [2024-12-08 06:32:16.342982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.440 [2024-12-08 06:32:16.343028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.440 qpair failed and we were unable to recover it. 
00:28:26.440 [2024-12-08 06:32:16.352842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.440 [2024-12-08 06:32:16.352930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.440 [2024-12-08 06:32:16.352955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.440 [2024-12-08 06:32:16.352970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.440 [2024-12-08 06:32:16.352982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.440 [2024-12-08 06:32:16.353012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.440 qpair failed and we were unable to recover it. 00:28:26.440 [2024-12-08 06:32:16.362834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.440 [2024-12-08 06:32:16.362925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.440 [2024-12-08 06:32:16.362955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.440 [2024-12-08 06:32:16.362971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.440 [2024-12-08 06:32:16.362983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.440 [2024-12-08 06:32:16.363015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.440 qpair failed and we were unable to recover it. 00:28:26.440 [2024-12-08 06:32:16.372901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.440 [2024-12-08 06:32:16.372993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.440 [2024-12-08 06:32:16.373033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.440 [2024-12-08 06:32:16.373047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.440 [2024-12-08 06:32:16.373060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.440 [2024-12-08 06:32:16.373089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.440 qpair failed and we were unable to recover it. 
00:28:26.440 [2024-12-08 06:32:16.382892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.440 [2024-12-08 06:32:16.383022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.440 [2024-12-08 06:32:16.383046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.440 [2024-12-08 06:32:16.383061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.440 [2024-12-08 06:32:16.383074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.440 [2024-12-08 06:32:16.383103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.440 qpair failed and we were unable to recover it. 00:28:26.440 [2024-12-08 06:32:16.392977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.440 [2024-12-08 06:32:16.393084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.440 [2024-12-08 06:32:16.393109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.440 [2024-12-08 06:32:16.393124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.440 [2024-12-08 06:32:16.393136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.440 [2024-12-08 06:32:16.393166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.440 qpair failed and we were unable to recover it. 00:28:26.440 [2024-12-08 06:32:16.403022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.440 [2024-12-08 06:32:16.403142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.440 [2024-12-08 06:32:16.403166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.440 [2024-12-08 06:32:16.403180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.440 [2024-12-08 06:32:16.403193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.440 [2024-12-08 06:32:16.403227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.440 qpair failed and we were unable to recover it. 
00:28:26.440 [2024-12-08 06:32:16.413047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.440 [2024-12-08 06:32:16.413131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.440 [2024-12-08 06:32:16.413156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.440 [2024-12-08 06:32:16.413171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.440 [2024-12-08 06:32:16.413183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.440 [2024-12-08 06:32:16.413212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.440 qpair failed and we were unable to recover it.
00:28:26.440 [2024-12-08 06:32:16.423074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.440 [2024-12-08 06:32:16.423178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.440 [2024-12-08 06:32:16.423203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.440 [2024-12-08 06:32:16.423217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.423229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.423258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.441 [2024-12-08 06:32:16.433102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.441 [2024-12-08 06:32:16.433191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.441 [2024-12-08 06:32:16.433215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.441 [2024-12-08 06:32:16.433230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.433241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.433271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.441 [2024-12-08 06:32:16.443125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.441 [2024-12-08 06:32:16.443214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.441 [2024-12-08 06:32:16.443239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.441 [2024-12-08 06:32:16.443253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.443266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.443296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.441 [2024-12-08 06:32:16.453147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.441 [2024-12-08 06:32:16.453235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.441 [2024-12-08 06:32:16.453259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.441 [2024-12-08 06:32:16.453273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.453286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.453315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.441 [2024-12-08 06:32:16.463153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.441 [2024-12-08 06:32:16.463264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.441 [2024-12-08 06:32:16.463288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.441 [2024-12-08 06:32:16.463303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.463315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.463345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.441 [2024-12-08 06:32:16.473162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.441 [2024-12-08 06:32:16.473249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.441 [2024-12-08 06:32:16.473273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.441 [2024-12-08 06:32:16.473287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.473300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.473336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.441 [2024-12-08 06:32:16.483178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.441 [2024-12-08 06:32:16.483278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.441 [2024-12-08 06:32:16.483305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.441 [2024-12-08 06:32:16.483321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.483333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.483364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.441 [2024-12-08 06:32:16.493217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.441 [2024-12-08 06:32:16.493297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.441 [2024-12-08 06:32:16.493326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.441 [2024-12-08 06:32:16.493341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.493354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.493383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.441 [2024-12-08 06:32:16.503321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.441 [2024-12-08 06:32:16.503446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.441 [2024-12-08 06:32:16.503471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.441 [2024-12-08 06:32:16.503487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.503499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.503540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.441 [2024-12-08 06:32:16.513314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.441 [2024-12-08 06:32:16.513413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.441 [2024-12-08 06:32:16.513439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.441 [2024-12-08 06:32:16.513453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.513466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.513504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.441 [2024-12-08 06:32:16.523312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.441 [2024-12-08 06:32:16.523398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.441 [2024-12-08 06:32:16.523422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.441 [2024-12-08 06:32:16.523437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.523450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.523479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.441 [2024-12-08 06:32:16.533396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.441 [2024-12-08 06:32:16.533517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.441 [2024-12-08 06:32:16.533543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.441 [2024-12-08 06:32:16.533557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.533575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.533611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.441 [2024-12-08 06:32:16.543408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.441 [2024-12-08 06:32:16.543503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.441 [2024-12-08 06:32:16.543527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.441 [2024-12-08 06:32:16.543542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.543554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.543584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.441 [2024-12-08 06:32:16.553430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.441 [2024-12-08 06:32:16.553515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.441 [2024-12-08 06:32:16.553554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.441 [2024-12-08 06:32:16.553568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.441 [2024-12-08 06:32:16.553582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.441 [2024-12-08 06:32:16.553613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.441 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.563451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.563537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.563562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.563577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.563589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.563619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.573476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.573563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.573604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.573619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.573632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.573663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.583536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.583645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.583671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.583685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.583713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.583757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.593559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.593656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.593681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.593696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.593733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.593767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.603545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.603626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.603650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.603664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.603676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.603731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.613644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.613770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.613795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.613810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.613822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.613853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.623643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.623756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.623788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.623804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.623817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.623848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.633672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.633791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.633817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.633832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.633844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.633876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.643676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.643783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.643809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.643824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.643836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.643867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.653730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.653821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.653846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.653861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.653873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.653904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.663776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.663899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.663924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.663944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.663958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.663990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.673791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.673882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.673908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.673923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.673936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.673967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.683798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.683940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.683967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.683982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.683996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.684042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.693801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.693893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.693918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.693933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.693946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.693984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.703848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.703957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.703982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.703997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.704009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.704054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.713836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.713929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.713954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.713969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.713982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.714012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.723866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.723959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.723984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.724013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.724026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.724056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.733940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.702 [2024-12-08 06:32:16.734041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.702 [2024-12-08 06:32:16.734068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.702 [2024-12-08 06:32:16.734083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.702 [2024-12-08 06:32:16.734096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.702 [2024-12-08 06:32:16.734125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.702 qpair failed and we were unable to recover it.
00:28:26.702 [2024-12-08 06:32:16.743943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.703 [2024-12-08 06:32:16.744071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.703 [2024-12-08 06:32:16.744097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.703 [2024-12-08 06:32:16.744112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.703 [2024-12-08 06:32:16.744124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.703 [2024-12-08 06:32:16.744154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.703 qpair failed and we were unable to recover it.
00:28:26.703 [2024-12-08 06:32:16.753970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.703 [2024-12-08 06:32:16.754076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.703 [2024-12-08 06:32:16.754100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.703 [2024-12-08 06:32:16.754114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.703 [2024-12-08 06:32:16.754127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.703 [2024-12-08 06:32:16.754157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.703 qpair failed and we were unable to recover it.
00:28:26.703 [2024-12-08 06:32:16.764039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.703 [2024-12-08 06:32:16.764149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.703 [2024-12-08 06:32:16.764173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.703 [2024-12-08 06:32:16.764187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.703 [2024-12-08 06:32:16.764200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.703 [2024-12-08 06:32:16.764230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.703 qpair failed and we were unable to recover it.
00:28:26.703 [2024-12-08 06:32:16.774039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.703 [2024-12-08 06:32:16.774170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.703 [2024-12-08 06:32:16.774196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.703 [2024-12-08 06:32:16.774211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.703 [2024-12-08 06:32:16.774223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.703 [2024-12-08 06:32:16.774252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.703 qpair failed and we were unable to recover it.
00:28:26.703 [2024-12-08 06:32:16.784097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.703 [2024-12-08 06:32:16.784201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.703 [2024-12-08 06:32:16.784225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.703 [2024-12-08 06:32:16.784240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.703 [2024-12-08 06:32:16.784252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.703 [2024-12-08 06:32:16.784282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.703 qpair failed and we were unable to recover it.
00:28:26.703 [2024-12-08 06:32:16.794277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.703 [2024-12-08 06:32:16.794375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.703 [2024-12-08 06:32:16.794399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.703 [2024-12-08 06:32:16.794418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.703 [2024-12-08 06:32:16.794431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.703 [2024-12-08 06:32:16.794461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.703 qpair failed and we were unable to recover it.
00:28:26.703 [2024-12-08 06:32:16.804181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.703 [2024-12-08 06:32:16.804282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.703 [2024-12-08 06:32:16.804306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.703 [2024-12-08 06:32:16.804321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.703 [2024-12-08 06:32:16.804334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.703 [2024-12-08 06:32:16.804363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.703 qpair failed and we were unable to recover it.
00:28:26.703 [2024-12-08 06:32:16.814187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.703 [2024-12-08 06:32:16.814268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.703 [2024-12-08 06:32:16.814292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.703 [2024-12-08 06:32:16.814305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.703 [2024-12-08 06:32:16.814317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.703 [2024-12-08 06:32:16.814347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.703 qpair failed and we were unable to recover it.
00:28:26.964 [2024-12-08 06:32:16.824318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.964 [2024-12-08 06:32:16.824415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.964 [2024-12-08 06:32:16.824440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.964 [2024-12-08 06:32:16.824454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.964 [2024-12-08 06:32:16.824467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.964 [2024-12-08 06:32:16.824509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.964 qpair failed and we were unable to recover it.
00:28:26.964 [2024-12-08 06:32:16.834200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.964 [2024-12-08 06:32:16.834294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.964 [2024-12-08 06:32:16.834321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.964 [2024-12-08 06:32:16.834352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.964 [2024-12-08 06:32:16.834366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.964 [2024-12-08 06:32:16.834401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.964 qpair failed and we were unable to recover it.
00:28:26.964 [2024-12-08 06:32:16.844222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.964 [2024-12-08 06:32:16.844342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.965 [2024-12-08 06:32:16.844369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.965 [2024-12-08 06:32:16.844383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.965 [2024-12-08 06:32:16.844397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.965 [2024-12-08 06:32:16.844426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.965 qpair failed and we were unable to recover it.
00:28:26.965 [2024-12-08 06:32:16.854265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.965 [2024-12-08 06:32:16.854349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.965 [2024-12-08 06:32:16.854373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.965 [2024-12-08 06:32:16.854388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.965 [2024-12-08 06:32:16.854401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.965 [2024-12-08 06:32:16.854431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.965 qpair failed and we were unable to recover it.
00:28:26.965 [2024-12-08 06:32:16.864333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.965 [2024-12-08 06:32:16.864437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.965 [2024-12-08 06:32:16.864461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.965 [2024-12-08 06:32:16.864475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.965 [2024-12-08 06:32:16.864488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.965 [2024-12-08 06:32:16.864518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.965 qpair failed and we were unable to recover it.
00:28:26.965 [2024-12-08 06:32:16.874309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.965 [2024-12-08 06:32:16.874393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.965 [2024-12-08 06:32:16.874417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.965 [2024-12-08 06:32:16.874431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.965 [2024-12-08 06:32:16.874444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.965 [2024-12-08 06:32:16.874473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.965 qpair failed and we were unable to recover it.
00:28:26.965 [2024-12-08 06:32:16.884387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.965 [2024-12-08 06:32:16.884491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.965 [2024-12-08 06:32:16.884516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.965 [2024-12-08 06:32:16.884531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.965 [2024-12-08 06:32:16.884543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.965 [2024-12-08 06:32:16.884573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.965 qpair failed and we were unable to recover it.
00:28:26.965 [2024-12-08 06:32:16.894341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.965 [2024-12-08 06:32:16.894433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.965 [2024-12-08 06:32:16.894459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.965 [2024-12-08 06:32:16.894473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.965 [2024-12-08 06:32:16.894485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.965 [2024-12-08 06:32:16.894514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.965 qpair failed and we were unable to recover it.
00:28:26.965 [2024-12-08 06:32:16.904387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.965 [2024-12-08 06:32:16.904496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.965 [2024-12-08 06:32:16.904521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.965 [2024-12-08 06:32:16.904536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.965 [2024-12-08 06:32:16.904548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.965 [2024-12-08 06:32:16.904577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.965 qpair failed and we were unable to recover it.
00:28:26.965 [2024-12-08 06:32:16.914440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.965 [2024-12-08 06:32:16.914537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.965 [2024-12-08 06:32:16.914562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.965 [2024-12-08 06:32:16.914576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.965 [2024-12-08 06:32:16.914588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.965 [2024-12-08 06:32:16.914618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.965 qpair failed and we were unable to recover it.
00:28:26.965 [2024-12-08 06:32:16.924494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.965 [2024-12-08 06:32:16.924584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.965 [2024-12-08 06:32:16.924614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.965 [2024-12-08 06:32:16.924630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.965 [2024-12-08 06:32:16.924643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.965 [2024-12-08 06:32:16.924675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.965 qpair failed and we were unable to recover it.
00:28:26.965 [2024-12-08 06:32:16.934511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.965 [2024-12-08 06:32:16.934600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.965 [2024-12-08 06:32:16.934624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.965 [2024-12-08 06:32:16.934638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.965 [2024-12-08 06:32:16.934651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.965 [2024-12-08 06:32:16.934680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.965 qpair failed and we were unable to recover it.
00:28:26.965 [2024-12-08 06:32:16.944502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.965 [2024-12-08 06:32:16.944613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.965 [2024-12-08 06:32:16.944637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.965 [2024-12-08 06:32:16.944651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.965 [2024-12-08 06:32:16.944665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.965 [2024-12-08 06:32:16.944694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.965 qpair failed and we were unable to recover it.
00:28:26.965 [2024-12-08 06:32:16.954527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.965 [2024-12-08 06:32:16.954610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.965 [2024-12-08 06:32:16.954634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.965 [2024-12-08 06:32:16.954649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.965 [2024-12-08 06:32:16.954661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.965 [2024-12-08 06:32:16.954691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.965 qpair failed and we were unable to recover it.
00:28:26.965 [2024-12-08 06:32:16.964542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.965 [2024-12-08 06:32:16.964627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.965 [2024-12-08 06:32:16.964650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.965 [2024-12-08 06:32:16.964664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.965 [2024-12-08 06:32:16.964683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.965 [2024-12-08 06:32:16.964736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.965 qpair failed and we were unable to recover it.
00:28:26.965 [2024-12-08 06:32:16.974560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:26.965 [2024-12-08 06:32:16.974642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:26.966 [2024-12-08 06:32:16.974666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:26.966 [2024-12-08 06:32:16.974680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:26.966 [2024-12-08 06:32:16.974693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:26.966 [2024-12-08 06:32:16.974747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:26.966 qpair failed and we were unable to recover it.
00:28:26.966 [2024-12-08 06:32:16.984657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.966 [2024-12-08 06:32:16.984768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.966 [2024-12-08 06:32:16.984792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.966 [2024-12-08 06:32:16.984807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.966 [2024-12-08 06:32:16.984820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.966 [2024-12-08 06:32:16.984850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.966 qpair failed and we were unable to recover it. 00:28:26.966 [2024-12-08 06:32:16.994698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.966 [2024-12-08 06:32:16.994806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.966 [2024-12-08 06:32:16.994831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.966 [2024-12-08 06:32:16.994845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.966 [2024-12-08 06:32:16.994858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.966 [2024-12-08 06:32:16.994887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.966 qpair failed and we were unable to recover it. 00:28:26.966 [2024-12-08 06:32:17.004714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.966 [2024-12-08 06:32:17.004845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.966 [2024-12-08 06:32:17.004871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.966 [2024-12-08 06:32:17.004886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.966 [2024-12-08 06:32:17.004900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.966 [2024-12-08 06:32:17.004931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.966 qpair failed and we were unable to recover it. 
00:28:26.966 [2024-12-08 06:32:17.014747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.966 [2024-12-08 06:32:17.014836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.966 [2024-12-08 06:32:17.014861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.966 [2024-12-08 06:32:17.014876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.966 [2024-12-08 06:32:17.014888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.966 [2024-12-08 06:32:17.014919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.966 qpair failed and we were unable to recover it. 00:28:26.966 [2024-12-08 06:32:17.024793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.966 [2024-12-08 06:32:17.024890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.966 [2024-12-08 06:32:17.024914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.966 [2024-12-08 06:32:17.024929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.966 [2024-12-08 06:32:17.024941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.966 [2024-12-08 06:32:17.024973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.966 qpair failed and we were unable to recover it. 00:28:26.966 [2024-12-08 06:32:17.034767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.966 [2024-12-08 06:32:17.034861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.966 [2024-12-08 06:32:17.034886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.966 [2024-12-08 06:32:17.034901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.966 [2024-12-08 06:32:17.034915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.966 [2024-12-08 06:32:17.034945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.966 qpair failed and we were unable to recover it. 
00:28:26.966 [2024-12-08 06:32:17.044821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.966 [2024-12-08 06:32:17.044910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.966 [2024-12-08 06:32:17.044935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.966 [2024-12-08 06:32:17.044950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.966 [2024-12-08 06:32:17.044963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.966 [2024-12-08 06:32:17.044994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.966 qpair failed and we were unable to recover it. 00:28:26.966 [2024-12-08 06:32:17.054840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.966 [2024-12-08 06:32:17.054929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.966 [2024-12-08 06:32:17.054959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.966 [2024-12-08 06:32:17.054975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.966 [2024-12-08 06:32:17.054988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.966 [2024-12-08 06:32:17.055033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.966 qpair failed and we were unable to recover it. 00:28:26.966 [2024-12-08 06:32:17.064911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.966 [2024-12-08 06:32:17.065020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.966 [2024-12-08 06:32:17.065044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.966 [2024-12-08 06:32:17.065059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.966 [2024-12-08 06:32:17.065072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.966 [2024-12-08 06:32:17.065101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.966 qpair failed and we were unable to recover it. 
00:28:26.966 [2024-12-08 06:32:17.074946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:26.966 [2024-12-08 06:32:17.075048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:26.966 [2024-12-08 06:32:17.075073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:26.966 [2024-12-08 06:32:17.075087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:26.966 [2024-12-08 06:32:17.075114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:26.966 [2024-12-08 06:32:17.075157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.966 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-08 06:32:17.084948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.225 [2024-12-08 06:32:17.085062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.225 [2024-12-08 06:32:17.085086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.225 [2024-12-08 06:32:17.085101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.225 [2024-12-08 06:32:17.085113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.225 [2024-12-08 06:32:17.085144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-08 06:32:17.094967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.225 [2024-12-08 06:32:17.095079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.225 [2024-12-08 06:32:17.095103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.225 [2024-12-08 06:32:17.095118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.225 [2024-12-08 06:32:17.095136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.225 [2024-12-08 06:32:17.095168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.225 qpair failed and we were unable to recover it. 
00:28:27.225 [2024-12-08 06:32:17.105058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.225 [2024-12-08 06:32:17.105149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.225 [2024-12-08 06:32:17.105175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.225 [2024-12-08 06:32:17.105191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.225 [2024-12-08 06:32:17.105204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.225 [2024-12-08 06:32:17.105240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-08 06:32:17.115053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.225 [2024-12-08 06:32:17.115139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.225 [2024-12-08 06:32:17.115164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.225 [2024-12-08 06:32:17.115179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.225 [2024-12-08 06:32:17.115191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.225 [2024-12-08 06:32:17.115220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-08 06:32:17.125045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.225 [2024-12-08 06:32:17.125162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.225 [2024-12-08 06:32:17.125186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.225 [2024-12-08 06:32:17.125200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.225 [2024-12-08 06:32:17.125213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.225 [2024-12-08 06:32:17.125242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.225 qpair failed and we were unable to recover it. 
00:28:27.225 [2024-12-08 06:32:17.135123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.225 [2024-12-08 06:32:17.135207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.225 [2024-12-08 06:32:17.135231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.225 [2024-12-08 06:32:17.135245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.225 [2024-12-08 06:32:17.135258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.225 [2024-12-08 06:32:17.135287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-08 06:32:17.145176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.225 [2024-12-08 06:32:17.145295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.225 [2024-12-08 06:32:17.145319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.225 [2024-12-08 06:32:17.145333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.225 [2024-12-08 06:32:17.145346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.225 [2024-12-08 06:32:17.145376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-08 06:32:17.155141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.225 [2024-12-08 06:32:17.155225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.225 [2024-12-08 06:32:17.155249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.225 [2024-12-08 06:32:17.155263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.225 [2024-12-08 06:32:17.155276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.225 [2024-12-08 06:32:17.155306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.225 qpair failed and we were unable to recover it. 
00:28:27.225 [2024-12-08 06:32:17.165215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.225 [2024-12-08 06:32:17.165313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.225 [2024-12-08 06:32:17.165337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.225 [2024-12-08 06:32:17.165352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.225 [2024-12-08 06:32:17.165364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.225 [2024-12-08 06:32:17.165394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-08 06:32:17.175200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.225 [2024-12-08 06:32:17.175329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.225 [2024-12-08 06:32:17.175354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.225 [2024-12-08 06:32:17.175369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.225 [2024-12-08 06:32:17.175381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.225 [2024-12-08 06:32:17.175411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.225 qpair failed and we were unable to recover it. 00:28:27.225 [2024-12-08 06:32:17.185207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.225 [2024-12-08 06:32:17.185298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.225 [2024-12-08 06:32:17.185327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.225 [2024-12-08 06:32:17.185342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.225 [2024-12-08 06:32:17.185354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.225 [2024-12-08 06:32:17.185385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.225 qpair failed and we were unable to recover it. 
00:28:27.225 [2024-12-08 06:32:17.195252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.195346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.195371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.195385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.195398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.195427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-08 06:32:17.205287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.205377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.205401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.205416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.205429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.205458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-08 06:32:17.215319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.215443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.215467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.215481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.215493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.215522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 
00:28:27.226 [2024-12-08 06:32:17.225349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.225439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.225464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.225483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.225496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.225526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-08 06:32:17.235358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.235447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.235473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.235487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.235499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.235528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-08 06:32:17.245358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.245447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.245472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.245487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.245499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.245529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 
00:28:27.226 [2024-12-08 06:32:17.255379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.255480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.255505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.255520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.255532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.255561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-08 06:32:17.265433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.265526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.265550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.265564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.265577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.265606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-08 06:32:17.275444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.275531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.275557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.275572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.275584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.275613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 
00:28:27.226 [2024-12-08 06:32:17.285520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.285639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.285665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.285679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.285692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.285744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-08 06:32:17.295526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.295607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.295631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.295644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.295657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.295686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-08 06:32:17.305539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.305626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.305650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.305664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.305677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.305734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 
00:28:27.226 [2024-12-08 06:32:17.315537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.315648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.315674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.315688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.315701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.315739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-08 06:32:17.325582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.325708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.325742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.325758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.325771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.325802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 00:28:27.226 [2024-12-08 06:32:17.335608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.226 [2024-12-08 06:32:17.335712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.226 [2024-12-08 06:32:17.335744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.226 [2024-12-08 06:32:17.335760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.226 [2024-12-08 06:32:17.335773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.226 [2024-12-08 06:32:17.335804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.226 qpair failed and we were unable to recover it. 
00:28:27.484 [2024-12-08 06:32:17.345642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.484 [2024-12-08 06:32:17.345753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.484 [2024-12-08 06:32:17.345778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.484 [2024-12-08 06:32:17.345793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.484 [2024-12-08 06:32:17.345807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.484 [2024-12-08 06:32:17.345837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.484 qpair failed and we were unable to recover it. 00:28:27.484 [2024-12-08 06:32:17.355666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.484 [2024-12-08 06:32:17.355793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.484 [2024-12-08 06:32:17.355820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.484 [2024-12-08 06:32:17.355841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.484 [2024-12-08 06:32:17.355854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.484 [2024-12-08 06:32:17.355884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.484 qpair failed and we were unable to recover it. 00:28:27.484 [2024-12-08 06:32:17.365829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.484 [2024-12-08 06:32:17.365959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.484 [2024-12-08 06:32:17.365985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.484 [2024-12-08 06:32:17.366000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.484 [2024-12-08 06:32:17.366012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.484 [2024-12-08 06:32:17.366057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.484 qpair failed and we were unable to recover it. 
00:28:27.484 [2024-12-08 06:32:17.375754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.484 [2024-12-08 06:32:17.375841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.484 [2024-12-08 06:32:17.375866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.485 [2024-12-08 06:32:17.375881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.485 [2024-12-08 06:32:17.375893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.485 [2024-12-08 06:32:17.375924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.485 qpair failed and we were unable to recover it. 00:28:27.485 [2024-12-08 06:32:17.385855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.485 [2024-12-08 06:32:17.385998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.485 [2024-12-08 06:32:17.386040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.485 [2024-12-08 06:32:17.386055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.485 [2024-12-08 06:32:17.386067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.485 [2024-12-08 06:32:17.386097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.485 qpair failed and we were unable to recover it. 00:28:27.485 [2024-12-08 06:32:17.395806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.485 [2024-12-08 06:32:17.395894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.485 [2024-12-08 06:32:17.395919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.485 [2024-12-08 06:32:17.395934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.485 [2024-12-08 06:32:17.395947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.485 [2024-12-08 06:32:17.395982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.485 qpair failed and we were unable to recover it. 
00:28:27.485 [2024-12-08 06:32:17.405845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.485 [2024-12-08 06:32:17.405930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.485 [2024-12-08 06:32:17.405955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.485 [2024-12-08 06:32:17.405970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.485 [2024-12-08 06:32:17.405983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.485 [2024-12-08 06:32:17.406013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.485 qpair failed and we were unable to recover it. 00:28:27.485 [2024-12-08 06:32:17.415853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.485 [2024-12-08 06:32:17.415936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.485 [2024-12-08 06:32:17.415961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.485 [2024-12-08 06:32:17.415976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.485 [2024-12-08 06:32:17.415989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.485 [2024-12-08 06:32:17.416034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.485 qpair failed and we were unable to recover it. 00:28:27.485 [2024-12-08 06:32:17.425900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.485 [2024-12-08 06:32:17.425992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.485 [2024-12-08 06:32:17.426033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.485 [2024-12-08 06:32:17.426048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.485 [2024-12-08 06:32:17.426060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.485 [2024-12-08 06:32:17.426090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.485 qpair failed and we were unable to recover it. 
00:28:27.485 [2024-12-08 06:32:17.435920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.485 [2024-12-08 06:32:17.436038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.485 [2024-12-08 06:32:17.436063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.485 [2024-12-08 06:32:17.436077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.485 [2024-12-08 06:32:17.436090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.485 [2024-12-08 06:32:17.436120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.485 qpair failed and we were unable to recover it. 00:28:27.485 [2024-12-08 06:32:17.445955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.485 [2024-12-08 06:32:17.446054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.485 [2024-12-08 06:32:17.446079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.485 [2024-12-08 06:32:17.446093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.485 [2024-12-08 06:32:17.446105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.485 [2024-12-08 06:32:17.446134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.485 qpair failed and we were unable to recover it. 00:28:27.485 [2024-12-08 06:32:17.455955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.485 [2024-12-08 06:32:17.456058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.485 [2024-12-08 06:32:17.456082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.485 [2024-12-08 06:32:17.456096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.485 [2024-12-08 06:32:17.456108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.485 [2024-12-08 06:32:17.456137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.485 qpair failed and we were unable to recover it. 
00:28:27.485 [2024-12-08 06:32:17.466029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.485 [2024-12-08 06:32:17.466135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.485 [2024-12-08 06:32:17.466160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.485 [2024-12-08 06:32:17.466175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.485 [2024-12-08 06:32:17.466188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.485 [2024-12-08 06:32:17.466217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.485 qpair failed and we were unable to recover it. 00:28:27.485 [2024-12-08 06:32:17.476124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.485 [2024-12-08 06:32:17.476253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.485 [2024-12-08 06:32:17.476278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.485 [2024-12-08 06:32:17.476294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.485 [2024-12-08 06:32:17.476306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.485 [2024-12-08 06:32:17.476336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.485 qpair failed and we were unable to recover it. 00:28:27.485 [2024-12-08 06:32:17.486077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.485 [2024-12-08 06:32:17.486160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.485 [2024-12-08 06:32:17.486189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.485 [2024-12-08 06:32:17.486204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.485 [2024-12-08 06:32:17.486217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.485 [2024-12-08 06:32:17.486247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.485 qpair failed and we were unable to recover it. 
00:28:27.485 [2024-12-08 06:32:17.496089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.485 [2024-12-08 06:32:17.496209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.485 [2024-12-08 06:32:17.496235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.485 [2024-12-08 06:32:17.496250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.485 [2024-12-08 06:32:17.496262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.485 [2024-12-08 06:32:17.496291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.485 qpair failed and we were unable to recover it. 00:28:27.485 [2024-12-08 06:32:17.506124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.485 [2024-12-08 06:32:17.506217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.485 [2024-12-08 06:32:17.506242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.486 [2024-12-08 06:32:17.506256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.486 [2024-12-08 06:32:17.506269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.486 [2024-12-08 06:32:17.506298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.486 qpair failed and we were unable to recover it. 00:28:27.486 [2024-12-08 06:32:17.516146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.486 [2024-12-08 06:32:17.516234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.486 [2024-12-08 06:32:17.516258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.486 [2024-12-08 06:32:17.516272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.486 [2024-12-08 06:32:17.516284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.486 [2024-12-08 06:32:17.516313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.486 qpair failed and we were unable to recover it. 
00:28:27.486 [2024-12-08 06:32:17.526192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.486 [2024-12-08 06:32:17.526272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.486 [2024-12-08 06:32:17.526296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.486 [2024-12-08 06:32:17.526310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.486 [2024-12-08 06:32:17.526328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.486 [2024-12-08 06:32:17.526358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.486 qpair failed and we were unable to recover it. 00:28:27.486 [2024-12-08 06:32:17.536195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.486 [2024-12-08 06:32:17.536325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.486 [2024-12-08 06:32:17.536350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.486 [2024-12-08 06:32:17.536365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.486 [2024-12-08 06:32:17.536378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.486 [2024-12-08 06:32:17.536407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.486 qpair failed and we were unable to recover it. 00:28:27.486 [2024-12-08 06:32:17.546288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.486 [2024-12-08 06:32:17.546385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.486 [2024-12-08 06:32:17.546408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.486 [2024-12-08 06:32:17.546424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.486 [2024-12-08 06:32:17.546436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.486 [2024-12-08 06:32:17.546465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.486 qpair failed and we were unable to recover it. 
00:28:27.486 [2024-12-08 06:32:17.556278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.486 [2024-12-08 06:32:17.556360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.486 [2024-12-08 06:32:17.556383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.486 [2024-12-08 06:32:17.556397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.486 [2024-12-08 06:32:17.556410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.486 [2024-12-08 06:32:17.556439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.486 qpair failed and we were unable to recover it. 00:28:27.486 [2024-12-08 06:32:17.566345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.486 [2024-12-08 06:32:17.566426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.486 [2024-12-08 06:32:17.566450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.486 [2024-12-08 06:32:17.566463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.486 [2024-12-08 06:32:17.566476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.486 [2024-12-08 06:32:17.566505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.486 qpair failed and we were unable to recover it. 00:28:27.486 [2024-12-08 06:32:17.576401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.486 [2024-12-08 06:32:17.576485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.486 [2024-12-08 06:32:17.576525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.486 [2024-12-08 06:32:17.576539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.486 [2024-12-08 06:32:17.576552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.486 [2024-12-08 06:32:17.576581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.486 qpair failed and we were unable to recover it. 
00:28:27.486 [2024-12-08 06:32:17.586411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.486 [2024-12-08 06:32:17.586512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.486 [2024-12-08 06:32:17.586536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.486 [2024-12-08 06:32:17.586550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.486 [2024-12-08 06:32:17.586562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.486 [2024-12-08 06:32:17.586592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.486 qpair failed and we were unable to recover it. 00:28:27.486 [2024-12-08 06:32:17.596387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.486 [2024-12-08 06:32:17.596473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.486 [2024-12-08 06:32:17.596497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.486 [2024-12-08 06:32:17.596512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.486 [2024-12-08 06:32:17.596524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.486 [2024-12-08 06:32:17.596553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.486 qpair failed and we were unable to recover it. 00:28:27.744 [2024-12-08 06:32:17.606499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.744 [2024-12-08 06:32:17.606628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.744 [2024-12-08 06:32:17.606654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.744 [2024-12-08 06:32:17.606669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.744 [2024-12-08 06:32:17.606681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.744 [2024-12-08 06:32:17.606734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.744 qpair failed and we were unable to recover it. 
00:28:27.744 [2024-12-08 06:32:17.616427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.744 [2024-12-08 06:32:17.616512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.744 [2024-12-08 06:32:17.616541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.744 [2024-12-08 06:32:17.616556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.744 [2024-12-08 06:32:17.616569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.744 [2024-12-08 06:32:17.616599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.744 qpair failed and we were unable to recover it. 00:28:27.744 [2024-12-08 06:32:17.626478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.744 [2024-12-08 06:32:17.626567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.744 [2024-12-08 06:32:17.626591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.744 [2024-12-08 06:32:17.626605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.744 [2024-12-08 06:32:17.626617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.745 [2024-12-08 06:32:17.626646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.745 qpair failed and we were unable to recover it. 00:28:27.745 [2024-12-08 06:32:17.636473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.745 [2024-12-08 06:32:17.636589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.745 [2024-12-08 06:32:17.636616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.745 [2024-12-08 06:32:17.636630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.745 [2024-12-08 06:32:17.636642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.745 [2024-12-08 06:32:17.636671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.745 qpair failed and we were unable to recover it. 
00:28:27.745 [2024-12-08 06:32:17.646512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.745 [2024-12-08 06:32:17.646594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.745 [2024-12-08 06:32:17.646618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.745 [2024-12-08 06:32:17.646634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.745 [2024-12-08 06:32:17.646646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.745 [2024-12-08 06:32:17.646676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.745 qpair failed and we were unable to recover it. 00:28:27.745 [2024-12-08 06:32:17.656593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.745 [2024-12-08 06:32:17.656698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.745 [2024-12-08 06:32:17.656747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.745 [2024-12-08 06:32:17.656763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.745 [2024-12-08 06:32:17.656781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.745 [2024-12-08 06:32:17.656813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.745 qpair failed and we were unable to recover it. 00:28:27.745 [2024-12-08 06:32:17.666583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.745 [2024-12-08 06:32:17.666671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.745 [2024-12-08 06:32:17.666697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.745 [2024-12-08 06:32:17.666734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.745 [2024-12-08 06:32:17.666748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.745 [2024-12-08 06:32:17.666780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.745 qpair failed and we were unable to recover it. 
00:28:27.745 [2024-12-08 06:32:17.676613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.745 [2024-12-08 06:32:17.676704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.745 [2024-12-08 06:32:17.676751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.745 [2024-12-08 06:32:17.676768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.745 [2024-12-08 06:32:17.676781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.745 [2024-12-08 06:32:17.676817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.745 qpair failed and we were unable to recover it. 00:28:27.745 [2024-12-08 06:32:17.686630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.745 [2024-12-08 06:32:17.686734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.745 [2024-12-08 06:32:17.686759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.745 [2024-12-08 06:32:17.686774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.745 [2024-12-08 06:32:17.686787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.745 [2024-12-08 06:32:17.686817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.745 qpair failed and we were unable to recover it. 00:28:27.745 [2024-12-08 06:32:17.696639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.745 [2024-12-08 06:32:17.696754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.745 [2024-12-08 06:32:17.696781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.745 [2024-12-08 06:32:17.696797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.745 [2024-12-08 06:32:17.696809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.745 [2024-12-08 06:32:17.696840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.745 qpair failed and we were unable to recover it. 
00:28:27.745 [2024-12-08 06:32:17.706699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.745 [2024-12-08 06:32:17.706825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.745 [2024-12-08 06:32:17.706851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.745 [2024-12-08 06:32:17.706866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.745 [2024-12-08 06:32:17.706879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.745 [2024-12-08 06:32:17.706910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.745 qpair failed and we were unable to recover it. 00:28:27.745 [2024-12-08 06:32:17.716749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.745 [2024-12-08 06:32:17.716850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.745 [2024-12-08 06:32:17.716874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.745 [2024-12-08 06:32:17.716888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.745 [2024-12-08 06:32:17.716901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.745 [2024-12-08 06:32:17.716931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.745 qpair failed and we were unable to recover it. 00:28:27.745 [2024-12-08 06:32:17.726759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.745 [2024-12-08 06:32:17.726846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.745 [2024-12-08 06:32:17.726873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.745 [2024-12-08 06:32:17.726887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.745 [2024-12-08 06:32:17.726900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.745 [2024-12-08 06:32:17.726930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.745 qpair failed and we were unable to recover it. 
00:28:27.745 [2024-12-08 06:32:17.736831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.745 [2024-12-08 06:32:17.736917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.745 [2024-12-08 06:32:17.736942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.745 [2024-12-08 06:32:17.736956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.745 [2024-12-08 06:32:17.736969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.745 [2024-12-08 06:32:17.737012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.745 qpair failed and we were unable to recover it. 00:28:27.745 [2024-12-08 06:32:17.746829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.745 [2024-12-08 06:32:17.746934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.745 [2024-12-08 06:32:17.746966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.745 [2024-12-08 06:32:17.746982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.745 [2024-12-08 06:32:17.746995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.745 [2024-12-08 06:32:17.747040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.745 qpair failed and we were unable to recover it. 00:28:27.745 [2024-12-08 06:32:17.756820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.745 [2024-12-08 06:32:17.756918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.745 [2024-12-08 06:32:17.756943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.745 [2024-12-08 06:32:17.756958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.746 [2024-12-08 06:32:17.756971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.746 [2024-12-08 06:32:17.757000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.746 qpair failed and we were unable to recover it. 
00:28:27.746 [2024-12-08 06:32:17.766927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.746 [2024-12-08 06:32:17.767045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.746 [2024-12-08 06:32:17.767071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.746 [2024-12-08 06:32:17.767086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.746 [2024-12-08 06:32:17.767098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.746 [2024-12-08 06:32:17.767128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.746 qpair failed and we were unable to recover it. 00:28:27.746 [2024-12-08 06:32:17.776912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.746 [2024-12-08 06:32:17.776997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.746 [2024-12-08 06:32:17.777022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.746 [2024-12-08 06:32:17.777036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.746 [2024-12-08 06:32:17.777049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.746 [2024-12-08 06:32:17.777095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.746 qpair failed and we were unable to recover it. 00:28:27.746 [2024-12-08 06:32:17.786963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.746 [2024-12-08 06:32:17.787070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.746 [2024-12-08 06:32:17.787095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.746 [2024-12-08 06:32:17.787114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.746 [2024-12-08 06:32:17.787127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.746 [2024-12-08 06:32:17.787157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.746 qpair failed and we were unable to recover it. 
00:28:27.746 [2024-12-08 06:32:17.796994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.746 [2024-12-08 06:32:17.797094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.746 [2024-12-08 06:32:17.797119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.746 [2024-12-08 06:32:17.797133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.746 [2024-12-08 06:32:17.797145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.746 [2024-12-08 06:32:17.797174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.746 qpair failed and we were unable to recover it. 00:28:27.746 [2024-12-08 06:32:17.806994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.746 [2024-12-08 06:32:17.807091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.746 [2024-12-08 06:32:17.807115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.746 [2024-12-08 06:32:17.807129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.746 [2024-12-08 06:32:17.807140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.746 [2024-12-08 06:32:17.807171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.746 qpair failed and we were unable to recover it. 00:28:27.746 [2024-12-08 06:32:17.817016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.746 [2024-12-08 06:32:17.817123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.746 [2024-12-08 06:32:17.817149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.746 [2024-12-08 06:32:17.817163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.746 [2024-12-08 06:32:17.817175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.746 [2024-12-08 06:32:17.817204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.746 qpair failed and we were unable to recover it. 
00:28:27.746 [2024-12-08 06:32:17.827066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.746 [2024-12-08 06:32:17.827205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.746 [2024-12-08 06:32:17.827231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.746 [2024-12-08 06:32:17.827246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.746 [2024-12-08 06:32:17.827259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.746 [2024-12-08 06:32:17.827296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.746 qpair failed and we were unable to recover it. 00:28:27.746 [2024-12-08 06:32:17.837076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.746 [2024-12-08 06:32:17.837162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.746 [2024-12-08 06:32:17.837188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.746 [2024-12-08 06:32:17.837202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.746 [2024-12-08 06:32:17.837215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.746 [2024-12-08 06:32:17.837245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.746 qpair failed and we were unable to recover it. 00:28:27.746 [2024-12-08 06:32:17.847107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.746 [2024-12-08 06:32:17.847189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.746 [2024-12-08 06:32:17.847213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.746 [2024-12-08 06:32:17.847227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.746 [2024-12-08 06:32:17.847240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.746 [2024-12-08 06:32:17.847269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.746 qpair failed and we were unable to recover it. 
00:28:27.746 [2024-12-08 06:32:17.857102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:27.746 [2024-12-08 06:32:17.857183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:27.746 [2024-12-08 06:32:17.857207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:27.746 [2024-12-08 06:32:17.857221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:27.746 [2024-12-08 06:32:17.857234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:27.746 [2024-12-08 06:32:17.857263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:27.746 qpair failed and we were unable to recover it. 00:28:28.004 [2024-12-08 06:32:17.867154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.004 [2024-12-08 06:32:17.867288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.004 [2024-12-08 06:32:17.867314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.004 [2024-12-08 06:32:17.867330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.004 [2024-12-08 06:32:17.867342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.004 [2024-12-08 06:32:17.867372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.004 qpair failed and we were unable to recover it. 00:28:28.004 [2024-12-08 06:32:17.877171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.004 [2024-12-08 06:32:17.877260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.004 [2024-12-08 06:32:17.877284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.004 [2024-12-08 06:32:17.877298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.004 [2024-12-08 06:32:17.877310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.004 [2024-12-08 06:32:17.877339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.004 qpair failed and we were unable to recover it. 
00:28:28.004 [2024-12-08 06:32:17.887210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.004 [2024-12-08 06:32:17.887330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.004 [2024-12-08 06:32:17.887357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.004 [2024-12-08 06:32:17.887371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.004 [2024-12-08 06:32:17.887383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.004 [2024-12-08 06:32:17.887412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.004 qpair failed and we were unable to recover it. 00:28:28.004 [2024-12-08 06:32:17.897237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.004 [2024-12-08 06:32:17.897355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.004 [2024-12-08 06:32:17.897381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.004 [2024-12-08 06:32:17.897396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.004 [2024-12-08 06:32:17.897408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.004 [2024-12-08 06:32:17.897437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.004 qpair failed and we were unable to recover it. 00:28:28.004 [2024-12-08 06:32:17.907288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.004 [2024-12-08 06:32:17.907415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.004 [2024-12-08 06:32:17.907441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.004 [2024-12-08 06:32:17.907456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.004 [2024-12-08 06:32:17.907467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.004 [2024-12-08 06:32:17.907497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.004 qpair failed and we were unable to recover it. 
00:28:28.004 [2024-12-08 06:32:17.917268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.004 [2024-12-08 06:32:17.917369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.004 [2024-12-08 06:32:17.917395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.004 [2024-12-08 06:32:17.917415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.004 [2024-12-08 06:32:17.917428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.004 [2024-12-08 06:32:17.917457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.004 qpair failed and we were unable to recover it. 00:28:28.004 [2024-12-08 06:32:17.927299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.004 [2024-12-08 06:32:17.927383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.004 [2024-12-08 06:32:17.927407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.004 [2024-12-08 06:32:17.927420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.004 [2024-12-08 06:32:17.927433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.004 [2024-12-08 06:32:17.927462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.004 qpair failed and we were unable to recover it. 00:28:28.004 [2024-12-08 06:32:17.937338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.004 [2024-12-08 06:32:17.937422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.004 [2024-12-08 06:32:17.937446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.004 [2024-12-08 06:32:17.937460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.005 [2024-12-08 06:32:17.937472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.005 [2024-12-08 06:32:17.937502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.005 qpair failed and we were unable to recover it. 
00:28:28.005 [2024-12-08 06:32:17.947367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.005 [2024-12-08 06:32:17.947474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.005 [2024-12-08 06:32:17.947499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.005 [2024-12-08 06:32:17.947513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.005 [2024-12-08 06:32:17.947526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.005 [2024-12-08 06:32:17.947556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.005 qpair failed and we were unable to recover it. 00:28:28.005 [2024-12-08 06:32:17.957415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.005 [2024-12-08 06:32:17.957542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.005 [2024-12-08 06:32:17.957568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.005 [2024-12-08 06:32:17.957583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.005 [2024-12-08 06:32:17.957595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.005 [2024-12-08 06:32:17.957629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.005 qpair failed and we were unable to recover it. 00:28:28.005 [2024-12-08 06:32:17.967419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.005 [2024-12-08 06:32:17.967505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.005 [2024-12-08 06:32:17.967529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.005 [2024-12-08 06:32:17.967542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.005 [2024-12-08 06:32:17.967555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.005 [2024-12-08 06:32:17.967584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.005 qpair failed and we were unable to recover it. 
00:28:28.005 [2024-12-08 06:32:17.977447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.005 [2024-12-08 06:32:17.977531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.005 [2024-12-08 06:32:17.977555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.005 [2024-12-08 06:32:17.977568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.005 [2024-12-08 06:32:17.977580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.005 [2024-12-08 06:32:17.977609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.005 qpair failed and we were unable to recover it. 00:28:28.005 [2024-12-08 06:32:17.987501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.005 [2024-12-08 06:32:17.987609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.005 [2024-12-08 06:32:17.987634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.005 [2024-12-08 06:32:17.987649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.005 [2024-12-08 06:32:17.987661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.005 [2024-12-08 06:32:17.987691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.005 qpair failed and we were unable to recover it. 00:28:28.005 [2024-12-08 06:32:17.997500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.005 [2024-12-08 06:32:17.997588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.005 [2024-12-08 06:32:17.997612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.005 [2024-12-08 06:32:17.997626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.005 [2024-12-08 06:32:17.997639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.005 [2024-12-08 06:32:17.997668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.005 qpair failed and we were unable to recover it. 
00:28:28.005 [2024-12-08 06:32:18.007516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.005 [2024-12-08 06:32:18.007603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.005 [2024-12-08 06:32:18.007627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.005 [2024-12-08 06:32:18.007642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.005 [2024-12-08 06:32:18.007654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.005 [2024-12-08 06:32:18.007683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.005 qpair failed and we were unable to recover it. 00:28:28.005 [2024-12-08 06:32:18.017537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.005 [2024-12-08 06:32:18.017632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.005 [2024-12-08 06:32:18.017658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.005 [2024-12-08 06:32:18.017672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.005 [2024-12-08 06:32:18.017685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.005 [2024-12-08 06:32:18.017737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.005 qpair failed and we were unable to recover it. 00:28:28.005 [2024-12-08 06:32:18.027633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.005 [2024-12-08 06:32:18.027747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.005 [2024-12-08 06:32:18.027772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.005 [2024-12-08 06:32:18.027787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.005 [2024-12-08 06:32:18.027800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.005 [2024-12-08 06:32:18.027831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.005 qpair failed and we were unable to recover it. 
00:28:28.005 [2024-12-08 06:32:18.037642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.005 [2024-12-08 06:32:18.037768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.005 [2024-12-08 06:32:18.037795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.005 [2024-12-08 06:32:18.037810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.005 [2024-12-08 06:32:18.037823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.005 [2024-12-08 06:32:18.037854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.005 qpair failed and we were unable to recover it. 00:28:28.005 [2024-12-08 06:32:18.047638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.005 [2024-12-08 06:32:18.047752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.005 [2024-12-08 06:32:18.047784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.005 [2024-12-08 06:32:18.047800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.005 [2024-12-08 06:32:18.047813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.005 [2024-12-08 06:32:18.047844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.005 qpair failed and we were unable to recover it. 00:28:28.005 [2024-12-08 06:32:18.057730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.005 [2024-12-08 06:32:18.057819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.005 [2024-12-08 06:32:18.057846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.005 [2024-12-08 06:32:18.057860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.005 [2024-12-08 06:32:18.057873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.005 [2024-12-08 06:32:18.057903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.005 qpair failed and we were unable to recover it. 
00:28:28.005 [2024-12-08 06:32:18.067832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.006 [2024-12-08 06:32:18.067927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.006 [2024-12-08 06:32:18.067954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.006 [2024-12-08 06:32:18.067969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.006 [2024-12-08 06:32:18.067981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.006 [2024-12-08 06:32:18.068012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.006 qpair failed and we were unable to recover it. 00:28:28.006 [2024-12-08 06:32:18.077794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.006 [2024-12-08 06:32:18.077893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.006 [2024-12-08 06:32:18.077917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.006 [2024-12-08 06:32:18.077932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.006 [2024-12-08 06:32:18.077945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.006 [2024-12-08 06:32:18.077975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.006 qpair failed and we were unable to recover it. 00:28:28.006 [2024-12-08 06:32:18.087851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.006 [2024-12-08 06:32:18.087953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.006 [2024-12-08 06:32:18.087979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.006 [2024-12-08 06:32:18.087995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.006 [2024-12-08 06:32:18.088028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.006 [2024-12-08 06:32:18.088060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.006 qpair failed and we were unable to recover it. 
00:28:28.006 [2024-12-08 06:32:18.097834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.006 [2024-12-08 06:32:18.097920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.006 [2024-12-08 06:32:18.097947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.006 [2024-12-08 06:32:18.097962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.006 [2024-12-08 06:32:18.097975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.006 [2024-12-08 06:32:18.098005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.006 qpair failed and we were unable to recover it. 00:28:28.006 [2024-12-08 06:32:18.107864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.006 [2024-12-08 06:32:18.107973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.006 [2024-12-08 06:32:18.108014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.006 [2024-12-08 06:32:18.108030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.006 [2024-12-08 06:32:18.108043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.006 [2024-12-08 06:32:18.108072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.006 qpair failed and we were unable to recover it. 00:28:28.006 [2024-12-08 06:32:18.117854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.006 [2024-12-08 06:32:18.117948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.006 [2024-12-08 06:32:18.117973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.006 [2024-12-08 06:32:18.117987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.006 [2024-12-08 06:32:18.117999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.006 [2024-12-08 06:32:18.118044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.006 qpair failed and we were unable to recover it. 
00:28:28.266 [2024-12-08 06:32:18.127906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.266 [2024-12-08 06:32:18.127988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.266 [2024-12-08 06:32:18.128013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.266 [2024-12-08 06:32:18.128027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.266 [2024-12-08 06:32:18.128040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.266 [2024-12-08 06:32:18.128071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.266 qpair failed and we were unable to recover it.
00:28:28.266 [2024-12-08 06:32:18.137965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.266 [2024-12-08 06:32:18.138101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.266 [2024-12-08 06:32:18.138127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.266 [2024-12-08 06:32:18.138141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.266 [2024-12-08 06:32:18.138154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.266 [2024-12-08 06:32:18.138193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.266 qpair failed and we were unable to recover it.
00:28:28.266 [2024-12-08 06:32:18.148030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.266 [2024-12-08 06:32:18.148174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.266 [2024-12-08 06:32:18.148199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.266 [2024-12-08 06:32:18.148214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.266 [2024-12-08 06:32:18.148227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.266 [2024-12-08 06:32:18.148256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.266 qpair failed and we were unable to recover it.
00:28:28.266 [2024-12-08 06:32:18.157989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.266 [2024-12-08 06:32:18.158099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.266 [2024-12-08 06:32:18.158123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.266 [2024-12-08 06:32:18.158136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.266 [2024-12-08 06:32:18.158149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.266 [2024-12-08 06:32:18.158178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.266 qpair failed and we were unable to recover it.
00:28:28.266 [2024-12-08 06:32:18.168006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.266 [2024-12-08 06:32:18.168111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.266 [2024-12-08 06:32:18.168135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.266 [2024-12-08 06:32:18.168149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.266 [2024-12-08 06:32:18.168161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.266 [2024-12-08 06:32:18.168191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.266 qpair failed and we were unable to recover it.
00:28:28.266 [2024-12-08 06:32:18.178108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.266 [2024-12-08 06:32:18.178217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.266 [2024-12-08 06:32:18.178248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.266 [2024-12-08 06:32:18.178263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.266 [2024-12-08 06:32:18.178275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.266 [2024-12-08 06:32:18.178305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.266 qpair failed and we were unable to recover it.
00:28:28.266 [2024-12-08 06:32:18.188158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.266 [2024-12-08 06:32:18.188247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.266 [2024-12-08 06:32:18.188273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.266 [2024-12-08 06:32:18.188288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.266 [2024-12-08 06:32:18.188300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.266 [2024-12-08 06:32:18.188329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.266 qpair failed and we were unable to recover it.
00:28:28.266 [2024-12-08 06:32:18.198139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.266 [2024-12-08 06:32:18.198232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.266 [2024-12-08 06:32:18.198256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.266 [2024-12-08 06:32:18.198270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.266 [2024-12-08 06:32:18.198283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.266 [2024-12-08 06:32:18.198311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.266 qpair failed and we were unable to recover it.
00:28:28.266 [2024-12-08 06:32:18.208119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.266 [2024-12-08 06:32:18.208259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.266 [2024-12-08 06:32:18.208284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.266 [2024-12-08 06:32:18.208299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.266 [2024-12-08 06:32:18.208311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.266 [2024-12-08 06:32:18.208351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.266 qpair failed and we were unable to recover it.
00:28:28.266 [2024-12-08 06:32:18.218208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.266 [2024-12-08 06:32:18.218293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.266 [2024-12-08 06:32:18.218317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.266 [2024-12-08 06:32:18.218331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.266 [2024-12-08 06:32:18.218349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.266 [2024-12-08 06:32:18.218379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.266 qpair failed and we were unable to recover it.
00:28:28.266 [2024-12-08 06:32:18.228263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.266 [2024-12-08 06:32:18.228360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.266 [2024-12-08 06:32:18.228386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.266 [2024-12-08 06:32:18.228400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.266 [2024-12-08 06:32:18.228413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.266 [2024-12-08 06:32:18.228442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.266 qpair failed and we were unable to recover it.
00:28:28.266 [2024-12-08 06:32:18.238248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.266 [2024-12-08 06:32:18.238333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.266 [2024-12-08 06:32:18.238357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.266 [2024-12-08 06:32:18.238371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.266 [2024-12-08 06:32:18.238384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.266 [2024-12-08 06:32:18.238415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.267 qpair failed and we were unable to recover it.
00:28:28.267 [2024-12-08 06:32:18.248278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.267 [2024-12-08 06:32:18.248360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.267 [2024-12-08 06:32:18.248384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.267 [2024-12-08 06:32:18.248398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.267 [2024-12-08 06:32:18.248410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.267 [2024-12-08 06:32:18.248440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.267 qpair failed and we were unable to recover it.
00:28:28.267 [2024-12-08 06:32:18.258297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.267 [2024-12-08 06:32:18.258383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.267 [2024-12-08 06:32:18.258407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.267 [2024-12-08 06:32:18.258421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.267 [2024-12-08 06:32:18.258433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.267 [2024-12-08 06:32:18.258463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.267 qpair failed and we were unable to recover it.
00:28:28.267 [2024-12-08 06:32:18.268360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.267 [2024-12-08 06:32:18.268486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.267 [2024-12-08 06:32:18.268510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.267 [2024-12-08 06:32:18.268524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.267 [2024-12-08 06:32:18.268537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.267 [2024-12-08 06:32:18.268566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.267 qpair failed and we were unable to recover it.
00:28:28.267 [2024-12-08 06:32:18.278350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.267 [2024-12-08 06:32:18.278488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.267 [2024-12-08 06:32:18.278514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.267 [2024-12-08 06:32:18.278528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.267 [2024-12-08 06:32:18.278541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.267 [2024-12-08 06:32:18.278571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.267 qpair failed and we were unable to recover it.
00:28:28.267 [2024-12-08 06:32:18.288371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.267 [2024-12-08 06:32:18.288459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.267 [2024-12-08 06:32:18.288483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.267 [2024-12-08 06:32:18.288498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.267 [2024-12-08 06:32:18.288510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.267 [2024-12-08 06:32:18.288540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.267 qpair failed and we were unable to recover it.
00:28:28.267 [2024-12-08 06:32:18.298409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.267 [2024-12-08 06:32:18.298495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.267 [2024-12-08 06:32:18.298519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.267 [2024-12-08 06:32:18.298533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.267 [2024-12-08 06:32:18.298545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.267 [2024-12-08 06:32:18.298574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.267 qpair failed and we were unable to recover it.
00:28:28.267 [2024-12-08 06:32:18.308435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.267 [2024-12-08 06:32:18.308536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.267 [2024-12-08 06:32:18.308566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.267 [2024-12-08 06:32:18.308581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.267 [2024-12-08 06:32:18.308594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.267 [2024-12-08 06:32:18.308623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.267 qpair failed and we were unable to recover it.
00:28:28.267 [2024-12-08 06:32:18.318444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.267 [2024-12-08 06:32:18.318528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.267 [2024-12-08 06:32:18.318553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.267 [2024-12-08 06:32:18.318567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.267 [2024-12-08 06:32:18.318580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.267 [2024-12-08 06:32:18.318609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.267 qpair failed and we were unable to recover it.
00:28:28.267 [2024-12-08 06:32:18.328459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.267 [2024-12-08 06:32:18.328633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.267 [2024-12-08 06:32:18.328659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.267 [2024-12-08 06:32:18.328674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.267 [2024-12-08 06:32:18.328687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.267 [2024-12-08 06:32:18.328718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.267 qpair failed and we were unable to recover it.
00:28:28.267 [2024-12-08 06:32:18.338501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.267 [2024-12-08 06:32:18.338606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.267 [2024-12-08 06:32:18.338631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.267 [2024-12-08 06:32:18.338645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.267 [2024-12-08 06:32:18.338659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.267 [2024-12-08 06:32:18.338689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.267 qpair failed and we were unable to recover it.
00:28:28.267 [2024-12-08 06:32:18.348564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.267 [2024-12-08 06:32:18.348685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.267 [2024-12-08 06:32:18.348733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.267 [2024-12-08 06:32:18.348755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.267 [2024-12-08 06:32:18.348769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.267 [2024-12-08 06:32:18.348801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.267 qpair failed and we were unable to recover it.
00:28:28.267 [2024-12-08 06:32:18.358560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.267 [2024-12-08 06:32:18.358648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.267 [2024-12-08 06:32:18.358672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.267 [2024-12-08 06:32:18.358686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.267 [2024-12-08 06:32:18.358699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.267 [2024-12-08 06:32:18.358756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.267 qpair failed and we were unable to recover it.
00:28:28.267 [2024-12-08 06:32:18.368623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.267 [2024-12-08 06:32:18.368728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.267 [2024-12-08 06:32:18.368754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.267 [2024-12-08 06:32:18.368769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.267 [2024-12-08 06:32:18.368782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.267 [2024-12-08 06:32:18.368813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.268 qpair failed and we were unable to recover it.
00:28:28.268 [2024-12-08 06:32:18.378626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.268 [2024-12-08 06:32:18.378718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.268 [2024-12-08 06:32:18.378753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.268 [2024-12-08 06:32:18.378768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.268 [2024-12-08 06:32:18.378781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.268 [2024-12-08 06:32:18.378812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.268 qpair failed and we were unable to recover it.
00:28:28.529 [2024-12-08 06:32:18.388695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.529 [2024-12-08 06:32:18.388839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.529 [2024-12-08 06:32:18.388864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.529 [2024-12-08 06:32:18.388879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.529 [2024-12-08 06:32:18.388892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.529 [2024-12-08 06:32:18.388932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.529 qpair failed and we were unable to recover it.
00:28:28.529 [2024-12-08 06:32:18.398694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.529 [2024-12-08 06:32:18.398835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.529 [2024-12-08 06:32:18.398860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.529 [2024-12-08 06:32:18.398876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.529 [2024-12-08 06:32:18.398889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.529 [2024-12-08 06:32:18.398920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.529 qpair failed and we were unable to recover it.
00:28:28.529 [2024-12-08 06:32:18.408682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.529 [2024-12-08 06:32:18.408798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.529 [2024-12-08 06:32:18.408824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.529 [2024-12-08 06:32:18.408839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.529 [2024-12-08 06:32:18.408851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.529 [2024-12-08 06:32:18.408881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.529 qpair failed and we were unable to recover it.
00:28:28.529 [2024-12-08 06:32:18.418742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.529 [2024-12-08 06:32:18.418880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.529 [2024-12-08 06:32:18.418905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.529 [2024-12-08 06:32:18.418920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.529 [2024-12-08 06:32:18.418933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.529 [2024-12-08 06:32:18.418964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.529 qpair failed and we were unable to recover it.
00:28:28.529 [2024-12-08 06:32:18.428815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.529 [2024-12-08 06:32:18.428913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.529 [2024-12-08 06:32:18.428938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.529 [2024-12-08 06:32:18.428953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.529 [2024-12-08 06:32:18.428966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.529 [2024-12-08 06:32:18.428997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.529 qpair failed and we were unable to recover it.
00:28:28.529 [2024-12-08 06:32:18.438792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.529 [2024-12-08 06:32:18.438886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.529 [2024-12-08 06:32:18.438911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.529 [2024-12-08 06:32:18.438926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.529 [2024-12-08 06:32:18.438939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.529 [2024-12-08 06:32:18.438969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.529 qpair failed and we were unable to recover it.
00:28:28.529 [2024-12-08 06:32:18.448825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.529 [2024-12-08 06:32:18.448918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.529 [2024-12-08 06:32:18.448942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.529 [2024-12-08 06:32:18.448957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.529 [2024-12-08 06:32:18.448970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.529 [2024-12-08 06:32:18.449002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.529 qpair failed and we were unable to recover it.
00:28:28.529 [2024-12-08 06:32:18.458840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.529 [2024-12-08 06:32:18.458931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.529 [2024-12-08 06:32:18.458955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.529 [2024-12-08 06:32:18.458970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.529 [2024-12-08 06:32:18.458982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.529 [2024-12-08 06:32:18.459028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.529 qpair failed and we were unable to recover it.
00:28:28.529 [2024-12-08 06:32:18.468915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.529 [2024-12-08 06:32:18.469024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.469048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.469063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.469076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.469105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.478924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.479076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.479101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.479120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.479133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.479163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.488923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.489009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.489033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.489049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.489062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.489107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.498921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.499073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.499099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.499114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.499127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.499156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.508968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.509071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.509095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.509109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.509121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.509150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.519036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.519160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.519186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.519209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.519222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.519259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.529076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.529158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.529183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.529197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.529211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.529241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.539138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.539249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.539274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.539288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.539301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.539330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.549139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.549230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.549254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.549268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.549281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.549310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.559115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.559202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.559227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.559241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.559254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.559283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.569200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.569285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.569309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.569323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.569337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.569366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.579230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.579354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.579394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.579409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.579422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.579453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.589204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.589311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.589336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.589351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.589363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.589393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.599302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.599425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.599450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.599465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.599478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.599508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.609283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.609370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.609399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.609414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.609426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.609456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.619328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.619415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.619439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.619453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.619465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.619494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.629365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.629459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.629483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.629497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.629510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.629541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.530 [2024-12-08 06:32:18.639345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.530 [2024-12-08 06:32:18.639429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.530 [2024-12-08 06:32:18.639453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.530 [2024-12-08 06:32:18.639467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.530 [2024-12-08 06:32:18.639480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.530 [2024-12-08 06:32:18.639510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.530 qpair failed and we were unable to recover it.
00:28:28.793 [2024-12-08 06:32:18.649408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.793 [2024-12-08 06:32:18.649499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.793 [2024-12-08 06:32:18.649524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.793 [2024-12-08 06:32:18.649538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.793 [2024-12-08 06:32:18.649557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.793 [2024-12-08 06:32:18.649587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.793 qpair failed and we were unable to recover it.
00:28:28.793 [2024-12-08 06:32:18.659495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.793 [2024-12-08 06:32:18.659597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.793 [2024-12-08 06:32:18.659621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.793 [2024-12-08 06:32:18.659635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.793 [2024-12-08 06:32:18.659648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.793 [2024-12-08 06:32:18.659678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.793 qpair failed and we were unable to recover it.
00:28:28.793 [2024-12-08 06:32:18.669493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.793 [2024-12-08 06:32:18.669616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.793 [2024-12-08 06:32:18.669640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.793 [2024-12-08 06:32:18.669654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.793 [2024-12-08 06:32:18.669667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.793 [2024-12-08 06:32:18.669697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.793 qpair failed and we were unable to recover it.
00:28:28.793 [2024-12-08 06:32:18.679486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.793 [2024-12-08 06:32:18.679575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.793 [2024-12-08 06:32:18.679600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.793 [2024-12-08 06:32:18.679614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.793 [2024-12-08 06:32:18.679627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.793 [2024-12-08 06:32:18.679656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.793 qpair failed and we were unable to recover it.
00:28:28.793 [2024-12-08 06:32:18.689526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:28.793 [2024-12-08 06:32:18.689615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:28.793 [2024-12-08 06:32:18.689639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:28.793 [2024-12-08 06:32:18.689654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:28.793 [2024-12-08 06:32:18.689666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90
00:28:28.793 [2024-12-08 06:32:18.689695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:28.793 qpair failed and we were unable to recover it.
00:28:28.793 [2024-12-08 06:32:18.699540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.793 [2024-12-08 06:32:18.699624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.793 [2024-12-08 06:32:18.699648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.793 [2024-12-08 06:32:18.699663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.793 [2024-12-08 06:32:18.699675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.793 [2024-12-08 06:32:18.699718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.793 qpair failed and we were unable to recover it. 00:28:28.793 [2024-12-08 06:32:18.709603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.793 [2024-12-08 06:32:18.709710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.793 [2024-12-08 06:32:18.709748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.793 [2024-12-08 06:32:18.709764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.793 [2024-12-08 06:32:18.709776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.793 [2024-12-08 06:32:18.709807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.793 qpair failed and we were unable to recover it. 00:28:28.793 [2024-12-08 06:32:18.719605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.793 [2024-12-08 06:32:18.719689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.793 [2024-12-08 06:32:18.719737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.793 [2024-12-08 06:32:18.719753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.793 [2024-12-08 06:32:18.719765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.793 [2024-12-08 06:32:18.719796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.793 qpair failed and we were unable to recover it. 
00:28:28.793 [2024-12-08 06:32:18.729631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.793 [2024-12-08 06:32:18.729776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.793 [2024-12-08 06:32:18.729802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.793 [2024-12-08 06:32:18.729817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.793 [2024-12-08 06:32:18.729829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.793 [2024-12-08 06:32:18.729860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.793 qpair failed and we were unable to recover it. 00:28:28.793 [2024-12-08 06:32:18.739651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.793 [2024-12-08 06:32:18.739758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.793 [2024-12-08 06:32:18.739789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.793 [2024-12-08 06:32:18.739805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.793 [2024-12-08 06:32:18.739818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.793 [2024-12-08 06:32:18.739849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.793 qpair failed and we were unable to recover it. 00:28:28.793 [2024-12-08 06:32:18.749728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.793 [2024-12-08 06:32:18.749822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.793 [2024-12-08 06:32:18.749847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.793 [2024-12-08 06:32:18.749862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.793 [2024-12-08 06:32:18.749874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.793 [2024-12-08 06:32:18.749905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.793 qpair failed and we were unable to recover it. 
00:28:28.793 [2024-12-08 06:32:18.759760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.793 [2024-12-08 06:32:18.759856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.793 [2024-12-08 06:32:18.759880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.793 [2024-12-08 06:32:18.759895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.759908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.759939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 00:28:28.794 [2024-12-08 06:32:18.769767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.769853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.769878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.769892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.769905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.769936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 00:28:28.794 [2024-12-08 06:32:18.779776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.779867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.779892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.779907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.779925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.779957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 
00:28:28.794 [2024-12-08 06:32:18.789823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.789934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.789959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.789974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.789987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.790032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 00:28:28.794 [2024-12-08 06:32:18.799966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.800088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.800112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.800127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.800139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.800169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 00:28:28.794 [2024-12-08 06:32:18.809916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.810026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.810051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.810065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.810078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.810108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 
00:28:28.794 [2024-12-08 06:32:18.819941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.820046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.820070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.820084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.820097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.820126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 00:28:28.794 [2024-12-08 06:32:18.829984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.830094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.830119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.830133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.830161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.830193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 00:28:28.794 [2024-12-08 06:32:18.839956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.840072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.840096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.840110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.840123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.840153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 
00:28:28.794 [2024-12-08 06:32:18.849972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.850062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.850101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.850115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.850128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.850157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 00:28:28.794 [2024-12-08 06:32:18.860021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.860122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.860146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.860160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.860172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.860203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 00:28:28.794 [2024-12-08 06:32:18.870052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.870142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.870170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.870185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.870198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.870227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 
00:28:28.794 [2024-12-08 06:32:18.880093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.880182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.880206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.880221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.880233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.880263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 00:28:28.794 [2024-12-08 06:32:18.890143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.890258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.890283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.890298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.890310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.890339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 00:28:28.794 [2024-12-08 06:32:18.900144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.900227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.900251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.900266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.900278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.900308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 
00:28:28.794 [2024-12-08 06:32:18.910172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.794 [2024-12-08 06:32:18.910266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.794 [2024-12-08 06:32:18.910291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.794 [2024-12-08 06:32:18.910312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.794 [2024-12-08 06:32:18.910326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:28.794 [2024-12-08 06:32:18.910357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:28.794 qpair failed and we were unable to recover it. 00:28:29.054 [2024-12-08 06:32:18.920206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.054 [2024-12-08 06:32:18.920329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.054 [2024-12-08 06:32:18.920354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.054 [2024-12-08 06:32:18.920369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.054 [2024-12-08 06:32:18.920382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.054 [2024-12-08 06:32:18.920411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.054 qpair failed and we were unable to recover it. 00:28:29.054 [2024-12-08 06:32:18.930224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.054 [2024-12-08 06:32:18.930345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.054 [2024-12-08 06:32:18.930369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.054 [2024-12-08 06:32:18.930383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.054 [2024-12-08 06:32:18.930396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.054 [2024-12-08 06:32:18.930426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.054 qpair failed and we were unable to recover it. 
00:28:29.054 [2024-12-08 06:32:18.940239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.054 [2024-12-08 06:32:18.940326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.054 [2024-12-08 06:32:18.940350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.054 [2024-12-08 06:32:18.940365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.054 [2024-12-08 06:32:18.940377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.054 [2024-12-08 06:32:18.940407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.054 qpair failed and we were unable to recover it. 00:28:29.054 [2024-12-08 06:32:18.950288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.054 [2024-12-08 06:32:18.950393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.054 [2024-12-08 06:32:18.950418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.054 [2024-12-08 06:32:18.950432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.054 [2024-12-08 06:32:18.950445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.054 [2024-12-08 06:32:18.950480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.054 qpair failed and we were unable to recover it. 00:28:29.054 [2024-12-08 06:32:18.960303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.054 [2024-12-08 06:32:18.960392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.054 [2024-12-08 06:32:18.960416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.054 [2024-12-08 06:32:18.960430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.054 [2024-12-08 06:32:18.960443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.054 [2024-12-08 06:32:18.960472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.054 qpair failed and we were unable to recover it. 
00:28:29.054 [2024-12-08 06:32:18.970329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.054 [2024-12-08 06:32:18.970412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.054 [2024-12-08 06:32:18.970436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.054 [2024-12-08 06:32:18.970451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.054 [2024-12-08 06:32:18.970463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.054 [2024-12-08 06:32:18.970493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.054 qpair failed and we were unable to recover it. 00:28:29.054 [2024-12-08 06:32:18.980335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.054 [2024-12-08 06:32:18.980415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.054 [2024-12-08 06:32:18.980439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.054 [2024-12-08 06:32:18.980453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.054 [2024-12-08 06:32:18.980466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.054 [2024-12-08 06:32:18.980495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.054 qpair failed and we were unable to recover it. 00:28:29.054 [2024-12-08 06:32:18.990425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.054 [2024-12-08 06:32:18.990516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.054 [2024-12-08 06:32:18.990540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.054 [2024-12-08 06:32:18.990554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.054 [2024-12-08 06:32:18.990566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.054 [2024-12-08 06:32:18.990595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.054 qpair failed and we were unable to recover it. 
00:28:29.054 [2024-12-08 06:32:19.000391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.054 [2024-12-08 06:32:19.000502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.054 [2024-12-08 06:32:19.000526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.054 [2024-12-08 06:32:19.000541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.054 [2024-12-08 06:32:19.000554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.054 [2024-12-08 06:32:19.000584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.054 qpair failed and we were unable to recover it. 00:28:29.054 [2024-12-08 06:32:19.010401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.054 [2024-12-08 06:32:19.010490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.054 [2024-12-08 06:32:19.010514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.054 [2024-12-08 06:32:19.010529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.054 [2024-12-08 06:32:19.010541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.054 [2024-12-08 06:32:19.010570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.054 qpair failed and we were unable to recover it. 00:28:29.054 [2024-12-08 06:32:19.020444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.054 [2024-12-08 06:32:19.020574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.054 [2024-12-08 06:32:19.020606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.054 [2024-12-08 06:32:19.020621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.054 [2024-12-08 06:32:19.020634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.054 [2024-12-08 06:32:19.020663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.054 qpair failed and we were unable to recover it. 
00:28:29.054 [2024-12-08 06:32:19.030453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.054 [2024-12-08 06:32:19.030542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.054 [2024-12-08 06:32:19.030567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.054 [2024-12-08 06:32:19.030581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.054 [2024-12-08 06:32:19.030594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.054 [2024-12-08 06:32:19.030623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.054 qpair failed and we were unable to recover it. 00:28:29.054 [2024-12-08 06:32:19.040518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.054 [2024-12-08 06:32:19.040610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.054 [2024-12-08 06:32:19.040633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.055 [2024-12-08 06:32:19.040653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.055 [2024-12-08 06:32:19.040666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.055 [2024-12-08 06:32:19.040696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.055 qpair failed and we were unable to recover it. 00:28:29.055 [2024-12-08 06:32:19.050502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.055 [2024-12-08 06:32:19.050586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.055 [2024-12-08 06:32:19.050612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.055 [2024-12-08 06:32:19.050627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.055 [2024-12-08 06:32:19.050640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.055 [2024-12-08 06:32:19.050669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.055 qpair failed and we were unable to recover it. 
00:28:29.055 [2024-12-08 06:32:19.060535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.055 [2024-12-08 06:32:19.060629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.055 [2024-12-08 06:32:19.060653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.055 [2024-12-08 06:32:19.060666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.055 [2024-12-08 06:32:19.060678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.055 [2024-12-08 06:32:19.060734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.055 qpair failed and we were unable to recover it. 00:28:29.055 [2024-12-08 06:32:19.070578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.055 [2024-12-08 06:32:19.070667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.055 [2024-12-08 06:32:19.070690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.055 [2024-12-08 06:32:19.070704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.055 [2024-12-08 06:32:19.070715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.055 [2024-12-08 06:32:19.070772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.055 qpair failed and we were unable to recover it. 00:28:29.055 [2024-12-08 06:32:19.080638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.055 [2024-12-08 06:32:19.080750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.055 [2024-12-08 06:32:19.080775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.055 [2024-12-08 06:32:19.080789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.055 [2024-12-08 06:32:19.080802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.055 [2024-12-08 06:32:19.080838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.055 qpair failed and we were unable to recover it. 
00:28:29.055 [2024-12-08 06:32:19.090628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.055 [2024-12-08 06:32:19.090739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.055 [2024-12-08 06:32:19.090764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.055 [2024-12-08 06:32:19.090779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.055 [2024-12-08 06:32:19.090791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.055 [2024-12-08 06:32:19.090822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.055 qpair failed and we were unable to recover it. 00:28:29.055 [2024-12-08 06:32:19.100694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.055 [2024-12-08 06:32:19.100810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.055 [2024-12-08 06:32:19.100835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.055 [2024-12-08 06:32:19.100850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.055 [2024-12-08 06:32:19.100863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.055 [2024-12-08 06:32:19.100894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.055 qpair failed and we were unable to recover it. 00:28:29.055 [2024-12-08 06:32:19.110674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.055 [2024-12-08 06:32:19.110789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.055 [2024-12-08 06:32:19.110814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.055 [2024-12-08 06:32:19.110829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.055 [2024-12-08 06:32:19.110841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.055 [2024-12-08 06:32:19.110872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.055 qpair failed and we were unable to recover it. 
00:28:29.055 [2024-12-08 06:32:19.120715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.055 [2024-12-08 06:32:19.120816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.055 [2024-12-08 06:32:19.120840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.055 [2024-12-08 06:32:19.120854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.055 [2024-12-08 06:32:19.120867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7540000b90 00:28:29.055 [2024-12-08 06:32:19.120897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:29.055 qpair failed and we were unable to recover it. 00:28:29.055 [2024-12-08 06:32:19.120943] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:29.055 A controller has encountered a failure and is being reset. 00:28:29.055 [2024-12-08 06:32:19.121029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb2570 (9): Bad file descriptor 00:28:29.055 Controller properly reset. 00:28:32.337 Initializing NVMe Controllers 00:28:32.337 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:32.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:32.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:32.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:32.337 Initialization complete. Launching workers. 
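A note on the status pair repeated above: rc -5 is -EIO from the initiator's poll loop, and "sct 1, sc 130" is a Fabrics command-set status (SCT 1h) with status code 0x82, which SPDK's include/spdk/nvmf_spec.h defines as SPDK_NVMF_FABRIC_SC_INVALID_PARAM for the CONNECT command. That lines up with the target-side "Unknown controller ID 0x1" from _nvmf_ctrlr_add_io_qpair: the host keeps retrying an I/O-queue CONNECT naming a controller ID the target has already torn down. A small bash helper, purely illustrative and not part of the test suite, that decodes the pair:

decode_connect_status() {
    # Decode the "sct X, sc Y" pair that nvme_fabric_qpair_connect_poll prints.
    # Mapping taken from the Fabrics CONNECT status codes in nvmf_spec.h (SCT 1h).
    local sct=$1 sc=$2
    if [ "$sct" -ne 1 ]; then
        echo "sct $sct: not a command-specific (Fabrics) status type"
        return 0
    fi
    case "$sc" in
        128) echo "0x80 Incompatible Format" ;;
        129) echo "0x81 Controller Busy" ;;
        130) echo "0x82 Connect Invalid Parameters" ;;  # the code seen in this log
        131) echo "0x83 Connect Restart Discovery" ;;
        132) echo "0x84 Connect Invalid Host" ;;
        *)   echo "unrecognized Fabrics status code $sc" ;;
    esac
}
decode_connect_status 1 130   # -> 0x82 Connect Invalid Parameters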
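The teardown that follows (nvmftestfini in the log below) boils down to a handful of shell steps: flush I/O, unload the kernel initiator modules, kill the SPDK target process, and undo the test's network plumbing. A condensed sketch of that sequence, with the PID and interface name taken from this particular run and standing in as placeholders:

# Sketch of the teardown nvmftestfini performs below (values from this run).
sync                                     # settle outstanding I/O first
modprobe -v -r nvme-tcp                  # unloads nvme_tcp; nvme_fabrics/nvme_keyring follow
modprobe -v -r nvme-fabrics              # in case the first removal left it loaded
kill 1180543                             # SPDK target PID for this run (placeholder)
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's firewall rules
ip -4 addr flush cvl_0_1                 # clear the address off the test interface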
00:28:32.337 Starting thread on core 1 00:28:32.337 Starting thread on core 2 00:28:32.337 Starting thread on core 3 00:28:32.337 Starting thread on core 0 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:32.337 00:28:32.337 real 0m10.704s 00:28:32.337 user 0m25.888s 00:28:32.337 sys 0m6.412s 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.337 ************************************ 00:28:32.337 END TEST nvmf_target_disconnect_tc2 00:28:32.337 ************************************ 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:32.337 rmmod nvme_tcp 00:28:32.337 rmmod nvme_fabrics 00:28:32.337 rmmod nvme_keyring 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1180543 ']' 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1180543 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1180543 ']' 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1180543 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1180543 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1180543' 00:28:32.337 killing process with pid 1180543 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 1180543 00:28:32.337 06:32:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1180543 00:28:32.337 06:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:32.337 06:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:32.337 06:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:32.337 06:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:32.337 06:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:32.337 06:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:32.337 06:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:32.337 06:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:32.337 06:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:32.337 06:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.337 06:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.337 06:32:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.242 06:32:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:34.242 00:28:34.242 real 0m15.714s 00:28:34.242 user 0m51.710s 00:28:34.242 sys 0m8.786s 00:28:34.242 06:32:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:34.242 06:32:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:34.242 ************************************ 00:28:34.242 END TEST nvmf_target_disconnect 00:28:34.242 ************************************ 00:28:34.242 06:32:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:34.242 00:28:34.242 real 5m11.454s 00:28:34.242 user 11m6.576s 00:28:34.242 sys 1m18.685s 00:28:34.242 06:32:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:34.242 06:32:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.242 ************************************ 00:28:34.242 END TEST nvmf_host 00:28:34.242 ************************************ 00:28:34.242 06:32:24 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:34.242 06:32:24 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:34.242 06:32:24 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:34.242 06:32:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:34.242 06:32:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:34.242 06:32:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:34.242 ************************************ 00:28:34.242 START TEST nvmf_target_core_interrupt_mode 00:28:34.242 ************************************ 00:28:34.242 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:34.501 * Looking for test storage... 00:28:34.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:34.501 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:34.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.501 --rc genhtml_branch_coverage=1 00:28:34.502 --rc genhtml_function_coverage=1 00:28:34.502 --rc genhtml_legend=1 00:28:34.502 --rc geninfo_all_blocks=1 00:28:34.502 --rc geninfo_unexecuted_blocks=1 00:28:34.502 00:28:34.502 ' 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:34.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.502 --rc genhtml_branch_coverage=1 00:28:34.502 --rc genhtml_function_coverage=1 00:28:34.502 --rc genhtml_legend=1 00:28:34.502 --rc geninfo_all_blocks=1 00:28:34.502 --rc geninfo_unexecuted_blocks=1 00:28:34.502 00:28:34.502 ' 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:34.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.502 --rc genhtml_branch_coverage=1 00:28:34.502 --rc genhtml_function_coverage=1 00:28:34.502 --rc genhtml_legend=1 00:28:34.502 --rc geninfo_all_blocks=1 00:28:34.502 --rc geninfo_unexecuted_blocks=1 00:28:34.502 00:28:34.502 ' 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:34.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.502 --rc genhtml_branch_coverage=1 00:28:34.502 --rc genhtml_function_coverage=1 00:28:34.502 --rc genhtml_legend=1 00:28:34.502 --rc geninfo_all_blocks=1 00:28:34.502 --rc geninfo_unexecuted_blocks=1 00:28:34.502 00:28:34.502 ' 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:34.502 ************************************ 00:28:34.502 START TEST nvmf_abort 00:28:34.502 ************************************ 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:34.502 * Looking for test storage... 00:28:34.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:28:34.502 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:34.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.762 --rc genhtml_branch_coverage=1 00:28:34.762 --rc genhtml_function_coverage=1 00:28:34.762 --rc genhtml_legend=1 00:28:34.762 --rc geninfo_all_blocks=1 00:28:34.762 --rc geninfo_unexecuted_blocks=1 00:28:34.762 00:28:34.762 ' 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:34.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.762 --rc genhtml_branch_coverage=1 00:28:34.762 --rc genhtml_function_coverage=1 00:28:34.762 --rc genhtml_legend=1 00:28:34.762 --rc geninfo_all_blocks=1 00:28:34.762 --rc geninfo_unexecuted_blocks=1 00:28:34.762 00:28:34.762 ' 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:34.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.762 --rc genhtml_branch_coverage=1 00:28:34.762 --rc genhtml_function_coverage=1 00:28:34.762 --rc genhtml_legend=1 00:28:34.762 --rc geninfo_all_blocks=1 00:28:34.762 --rc geninfo_unexecuted_blocks=1 00:28:34.762 00:28:34.762 ' 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:34.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.762 --rc genhtml_branch_coverage=1 00:28:34.762 --rc genhtml_function_coverage=1 00:28:34.762 --rc genhtml_legend=1 00:28:34.762 --rc geninfo_all_blocks=1 00:28:34.762 --rc geninfo_unexecuted_blocks=1 00:28:34.762 00:28:34.762 ' 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:34.762 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.763 06:32:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:34.763 06:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.663 06:32:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:36.663 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
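The gather_supported_nvmf_pci_devs trace above buckets candidate NICs by PCI vendor/device ID (Intel E810 and X722, plus the Mellanox ConnectX family) before settling on the two E810 ports at 0000:84:00.0 and 0000:84:00.1. A minimal standalone sketch of that bucketing, with the IDs copied from the trace; the lspci parsing here is illustrative only and stands in for the harness's pci_bus_cache:

  intel=0x8086 mellanox=0x15b3
  declare -a e810=() x722=() mlx=()
  while read -r addr vendor device; do
    case "$vendor:$device" in
      "$intel:0x1592"|"$intel:0x159b") e810+=("$addr") ;;  # Intel E810 (ice)
      "$intel:0x37d2")                 x722+=("$addr") ;;  # Intel X722 (i40e)
      "$mellanox:"*)                   mlx+=("$addr")  ;;  # Mellanox ConnectX family
    esac
  done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, "0x"$3, "0x"$4}')
  printf 'e810 candidate: %s\n' "${e810[@]}"

With --transport=tcp the e810 bucket wins (the rdma-only branches above are skipped), which is why both 0x159b ports are kept as pci_devs.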
00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:36.663 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:36.663 Found net devices under 0000:84:00.0: cvl_0_0 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:36.663 Found net devices under 0000:84:00.1: cvl_0_1 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.663 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.664 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:36.664 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:36.664 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.664 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.664 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:36.664 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:36.664 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.664 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.664 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.664 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.664 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:36.664 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:36.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:28:36.922 00:28:36.922 --- 10.0.0.2 ping statistics --- 00:28:36.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.922 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:36.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:28:36.922 00:28:36.922 --- 10.0.0.1 ping statistics --- 00:28:36.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.922 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1183362 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1183362 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1183362 ']' 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.922 06:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.922 [2024-12-08 06:32:26.915971] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:36.922 [2024-12-08 06:32:26.917087] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:28:36.922 [2024-12-08 06:32:26.917143] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.922 [2024-12-08 06:32:26.991368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:37.181 [2024-12-08 06:32:27.051572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.181 [2024-12-08 06:32:27.051636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.181 [2024-12-08 06:32:27.051663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.181 [2024-12-08 06:32:27.051674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.181 [2024-12-08 06:32:27.051684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:37.181 [2024-12-08 06:32:27.053312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.181 [2024-12-08 06:32:27.053368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:37.181 [2024-12-08 06:32:27.053371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.181 [2024-12-08 06:32:27.140606] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:37.181 [2024-12-08 06:32:27.140827] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:37.181 [2024-12-08 06:32:27.140847] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
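Condensed from the nvmf_tcp_init and nvmfappstart traces above: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; the target binary is then launched inside that namespace with --interrupt-mode on core mask 0xE. The effective command sequence, taken from the trace ($SPDK abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &

The two ping checks mirror the statistics printed above, and the reactor/thread NOTICE lines confirm that all three cores in mask 0xE came up in interrupt mode, which is the property this run exercises.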
00:28:37.181 [2024-12-08 06:32:27.141096] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:37.181 [2024-12-08 06:32:27.190103] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:37.181 Malloc0 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:37.181 Delay0 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
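The rpc_cmd calls traced above (together with the two listener calls just below) assemble the abort test's target configuration over the UNIX socket /var/tmp/spdk.sock, which the root namespace still reaches even though the target runs inside the netns. Issued directly with SPDK's scripts/rpc.py, the equivalent sequence would look roughly like this (same $SPDK abbreviation as before):

  R="$SPDK/scripts/rpc.py"   # defaults to /var/tmp/spdk.sock
  $R nvmf_create_transport -t tcp -o -u 8192 -a 256
  $R bdev_malloc_create 64 4096 -b Malloc0      # 64 MiB bdev with 4096-byte blocks
  $R bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                # large artificial latencies, so submitted I/O lingers and can be aborted
  $R nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
  $R nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $R nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $R nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Note the delay bdev: backing the namespace with Delay0 rather than Malloc0 directly is what makes abort a meaningful test, since commands stay in flight long enough for abort requests to catch them.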
00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:37.181 [2024-12-08 06:32:27.258312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.181 06:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:37.440 [2024-12-08 06:32:27.360740] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:39.341 Initializing NVMe Controllers 00:28:39.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:39.341 controller IO queue size 128 less than required 00:28:39.341 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:39.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:39.341 Initialization complete. Launching workers. 
00:28:39.341 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28754 00:28:39.341 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28815, failed to submit 66 00:28:39.341 success 28754, unsuccessful 61, failed 0 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:39.341 rmmod nvme_tcp 00:28:39.341 rmmod nvme_fabrics 00:28:39.341 rmmod nvme_keyring 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1183362 ']' 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1183362 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1183362 ']' 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1183362 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.341 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1183362 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1183362' 00:28:39.600 killing process with pid 1183362 
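Reading the abort example's summary above, the arithmetic is internally consistent: 28,881 I/O were issued in total, of which 28,754 were successfully aborted (they appear both as 'failed' namespace I/O and as abort 'success' on the controller), while the remaining 127 completed normally — exactly the 66 aborts that failed to submit plus the 61 that were unsuccessful. With 'failed 0' the example exits cleanly and abort.sh proceeds to teardown. The example can be re-run standalone against the same listener; the command below is copied from the trace, with the exit-status check added as a plausible stand-in for what the harness relies on:

  "$SPDK/build/examples/abort" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128 \
  && echo 'abort example exited 0'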
00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1183362 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1183362 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.600 06:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.146 00:28:42.146 real 0m7.208s 00:28:42.146 user 0m9.111s 00:28:42.146 sys 0m2.863s 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:42.146 ************************************ 00:28:42.146 END TEST nvmf_abort 00:28:42.146 ************************************ 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:42.146 ************************************ 00:28:42.146 START TEST nvmf_ns_hotplug_stress 00:28:42.146 ************************************ 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:42.146 * Looking for test storage... 
00:28:42.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:42.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.146 --rc genhtml_branch_coverage=1 00:28:42.146 --rc genhtml_function_coverage=1 00:28:42.146 --rc genhtml_legend=1 00:28:42.146 --rc geninfo_all_blocks=1 00:28:42.146 --rc geninfo_unexecuted_blocks=1 00:28:42.146 00:28:42.146 ' 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:42.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.146 --rc genhtml_branch_coverage=1 00:28:42.146 --rc genhtml_function_coverage=1 00:28:42.146 --rc genhtml_legend=1 00:28:42.146 --rc geninfo_all_blocks=1 00:28:42.146 --rc geninfo_unexecuted_blocks=1 00:28:42.146 00:28:42.146 ' 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:42.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.146 --rc genhtml_branch_coverage=1 00:28:42.146 --rc genhtml_function_coverage=1 00:28:42.146 --rc genhtml_legend=1 00:28:42.146 --rc geninfo_all_blocks=1 00:28:42.146 --rc geninfo_unexecuted_blocks=1 00:28:42.146 00:28:42.146 ' 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:42.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.146 --rc genhtml_branch_coverage=1 00:28:42.146 --rc genhtml_function_coverage=1 
00:28:42.146 --rc genhtml_legend=1 00:28:42.146 --rc geninfo_all_blocks=1 00:28:42.146 --rc geninfo_unexecuted_blocks=1 00:28:42.146 00:28:42.146 ' 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.146 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:42.147 06:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:44.070 06:32:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:44.070 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:44.071 06:32:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:44.071 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:44.071 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.071 
06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:44.071 Found net devices under 0000:84:00.0: cvl_0_0 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:44.071 Found net devices under 0000:84:00.1: cvl_0_1 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.071 06:32:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:44.071 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:44.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:44.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:28:44.331 00:28:44.331 --- 10.0.0.2 ping statistics --- 00:28:44.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.331 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:44.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:44.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:28:44.331 00:28:44.331 --- 10.0.0.1 ping statistics --- 00:28:44.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.331 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:44.331 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:44.332 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1185714 00:28:44.332 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:44.332 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1185714 00:28:44.332 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1185714 ']' 00:28:44.332 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.332 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.332 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:44.332 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.332 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:44.332 [2024-12-08 06:32:34.283157] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:44.332 [2024-12-08 06:32:34.284254] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:28:44.332 [2024-12-08 06:32:34.284309] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.332 [2024-12-08 06:32:34.353452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:44.332 [2024-12-08 06:32:34.406564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.332 [2024-12-08 06:32:34.406625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.332 [2024-12-08 06:32:34.406654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:44.332 [2024-12-08 06:32:34.406665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:44.332 [2024-12-08 06:32:34.406674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.332 [2024-12-08 06:32:34.408296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.332 [2024-12-08 06:32:34.408366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.332 [2024-12-08 06:32:34.408363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.591 [2024-12-08 06:32:34.491907] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:44.591 [2024-12-08 06:32:34.492130] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:44.591 [2024-12-08 06:32:34.492168] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:44.591 [2024-12-08 06:32:34.492393] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
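The notices above come from nvmfappstart: nvmf_tgt is launched inside the test namespace with --interrupt-mode and core mask 0xE, which is why the log reports interrupt mode, three available cores, and reactors on cores 1-3. As a hedged editorial recap of that launch (paths exactly as captured in this run; the sketch itself is not part of the log):

    # Start the target inside the namespace created during nvmftestinit,
    # then remember its PID so waitforlisten/kill can find it.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &   # -i shm id, -e tracepoint mask, -m core mask
    nvmfpid=$!
    # waitforlisten then blocks until the app answers RPCs on /var/tmp/spdk.sock.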
00:28:44.591 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:44.591 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:28:44.591 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:44.591 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:44.591 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:28:44.591 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:44.591 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:28:44.591 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:28:44.852 [2024-12-08 06:32:34.793110] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:44.852 06:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:28:45.109 06:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:45.365 [2024-12-08 06:32:35.341457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:45.365 06:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:45.622 06:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:28:45.879 Malloc0
00:28:45.879 06:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:28:46.137 Delay0
00:28:46.137 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:46.394 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:28:46.651 NULL1
00:28:46.651 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
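Taken together, the rpc.py calls traced above are the entire target configuration for this test. For reference they condense to the sketch below; $rpc is shorthand introduced here for the rpc.py path used throughout this run, and the comments are editorial readings of the arguments, not log output:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport for the target
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0                     # small ramdisk, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000               # delay bdev layered on Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # nsid auto-assigned (lands on 1)
    $rpc bdev_null_create NULL1 1000 512                          # null bdev the loop will resize
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1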
00:28:46.909 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1186014 00:28:46.909 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:46.909 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:28:46.909 06:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.285 Read completed with error (sct=0, sc=11) 00:28:48.285 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.542 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:48.542 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:48.799 true 00:28:48.799 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:28:48.799 06:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.730 06:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:49.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.730 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:28:49.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.988 06:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:49.988 06:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:50.245 true 00:28:50.245 06:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:28:50.245 06:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:50.814 06:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:51.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.330 06:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:51.330 06:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:51.587 true 00:28:51.587 06:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:28:51.587 06:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.845 06:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:52.104 06:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:52.104 06:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:52.363 true 00:28:52.363 06:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:28:52.363 06:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:53.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
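From this point to the end of the capture the log is the stress loop proper: each pass hot-removes namespace 1 while spdk_nvme_perf keeps issuing reads, re-attaches Delay0, and resizes NULL1 one step larger (null_size 1001, 1002, ... above). The interleaved "Read completed with error" and "Message suppressed" lines are perf reporting I/Os that landed on the momentarily absent namespace, which is exactly the condition this test provokes. A minimal sketch of the loop as reconstructed from the trace ($rpc as in the earlier sketch; the loop structure is inferred, not lifted verbatim from ns_hotplug_stress.sh):

    null_size=1000
    while kill -0 "$PERF_PID"; do                # run while perf (PID 1186014 here) is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size" # grow NULL1; the RPC prints 'true' on success
    done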
00:28:53.298 06:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:53.556 06:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:53.556 06:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:53.814 true 00:28:53.814 06:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:28:53.814 06:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:54.071 06:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:54.329 06:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:54.329 06:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:54.587 true 00:28:54.588 06:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:28:54.588 06:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:54.845 06:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:55.103 06:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:55.103 06:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:55.361 true 00:28:55.361 06:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:28:55.361 06:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:56.296 06:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:56.555 06:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:56.555 06:32:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:56.814 true 00:28:56.814 06:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:28:56.814 06:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.072 06:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:57.331 06:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:57.331 06:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:57.589 true 00:28:57.589 06:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:28:57.589 06:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.847 06:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:58.108 06:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:58.108 06:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:58.674 true 00:28:58.674 06:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:28:58.674 06:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.611 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:59.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.611 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:59.611 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:59.868 true 00:28:59.868 06:32:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:28:59.868 06:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.126 06:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:00.763 06:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:00.763 06:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:00.763 true 00:29:00.763 06:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:00.763 06:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.749 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.749 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:01.749 06:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:02.316 true 00:29:02.316 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:02.316 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.316 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.574 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:02.574 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:02.842 true 00:29:02.842 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:02.842 06:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:03.405 06:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:03.405 06:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:03.405 06:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:03.662 true 00:29:03.920 06:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:03.920 06:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.854 06:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.114 06:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:05.114 06:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:05.114 true 00:29:05.372 06:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:05.372 06:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.629 06:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.886 06:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:05.886 06:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:06.144 true 00:29:06.144 06:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:06.144 06:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.402 06:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.661 06:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:06.661 06:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:06.919 true 00:29:06.919 06:32:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:06.919 06:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.855 06:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.113 06:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:08.114 06:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:08.371 true 00:29:08.371 06:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:08.371 06:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.630 06:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.889 06:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:08.889 06:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:09.148 true 00:29:09.148 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:09.148 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.406 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.664 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:09.664 06:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:10.231 true 00:29:10.231 06:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:10.231 06:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.165 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.423 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:11.423 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:11.681 true 00:29:11.681 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:11.681 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.939 06:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.198 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:12.198 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:12.457 true 00:29:12.457 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:12.457 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.715 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.973 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:12.973 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:13.231 true 00:29:13.231 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:13.231 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.166 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.424 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 
-- # null_size=1025 00:29:14.424 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:14.681 true 00:29:14.681 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:14.681 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.938 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:15.196 06:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:15.196 06:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:15.453 true 00:29:15.710 06:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:15.710 06:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:16.275 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.532 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:16.532 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:16.790 true 00:29:16.790 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:16.790 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.048 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:17.306 Initializing NVMe Controllers 00:29:17.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.306 Controller IO queue size 128, less than required. 00:29:17.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.306 Controller IO queue size 128, less than required. 00:29:17.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:17.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:17.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:17.306 Initialization complete. Launching workers.
00:29:17.306 ========================================================
00:29:17.306 Latency(us)
00:29:17.306 Device Information : IOPS MiB/s Average min max
00:29:17.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1241.90 0.61 48147.72 2844.16 1117376.49
00:29:17.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9860.25 4.81 12980.70 2591.08 460228.47
00:29:17.306 ========================================================
00:29:17.306 Total : 11102.15 5.42 16914.53 2591.08 1117376.49
00:29:17.306
00:29:17.306 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:17.306 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:17.872 true 00:29:17.872 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1186014 00:29:17.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1186014) - No such process 00:29:17.872 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1186014 00:29:17.872 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:18.129 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:18.387 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:18.387 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:18.387 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:18.387 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:18.387 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:18.645 null0 00:29:18.645 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:18.645 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:18.645 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:18.903 null1 00:29:18.903 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
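A quick consistency check on the summary above: the Total row is the sum of the per-namespace rows (1241.90 + 9860.25 = 11102.15 IOPS, 0.61 + 4.81 = 5.42 MiB/s), and its average latency is the IOPS-weighted mean, (1241.90 * 48147.72 + 9860.25 * 12980.70) / 11102.15 ≈ 16914.5 us. The MiB/s-to-IOPS ratio on both rows also works out to roughly 512 bytes per I/O.

The trace up to the "No such process" line is the single-namespace hot-plug loop of ns_hotplug_stress.sh (the sh@44-sh@50 markers): while the I/O generator is still alive, namespace 1 is detached, the Delay0 bdev is re-attached, and the NULL1 null bdev is grown by one unit. A minimal sketch of that loop, assuming $rpc points at scripts/rpc.py and $perf_pid holds the workload PID (1186014 in this run); the starting null_size is a placeholder, since the trace only shows it reaching 1028:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000                     # placeholder initial size, not shown in this excerpt
    while kill -0 "$perf_pid"; do      # sh@44: keep going while the workload runs
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: detach ns 1 under I/O
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                    # sh@49
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # sh@50: grow the null bdev
    done
    wait "$perf_pid"                   # sh@53: reap the workload after kill -0 fails

Once kill -0 reports "No such process" the loop falls through, the wait returns, and the sh@54/sh@55 calls remove namespaces 1 and 2 before the multi-threaded phase is set up.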
00:29:18.903 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:18.903 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:19.161 null2 00:29:19.161 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:19.161 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:19.161 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:19.420 null3 00:29:19.421 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:19.421 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:19.421 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:19.679 null4 00:29:19.679 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:19.679 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:19.679 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:19.937 null5 00:29:19.937 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:19.937 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:19.937 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:20.194 null6 00:29:20.194 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:20.194 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:20.194 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:20.454 null7 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 
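The sh@58-sh@60 lines above are the setup for the concurrent phase: eight null bdevs, null0 through null7, are created with the same two arguments seen on every bdev_null_create call here (100 and 4096). A sketch of that setup loop, reusing the hypothetical $rpc shorthand from the previous sketch:

    nthreads=8                                     # sh@58
    pids=()                                        # sh@58: will collect worker PIDs
    for ((i = 0; i < nthreads; i++)); do           # sh@59
        "$rpc" bdev_null_create "null$i" 100 4096  # sh@60: one backing bdev per worker
    done

Giving each worker its own bdev (and, below, its own namespace ID) keeps the eight add/remove streams from colliding on the same namespace.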
00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.454 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1190635 1190636 1190638 1190640 1190642 1190644 1190646 1190648 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.455 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:20.715 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:20.715 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:20.715 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:20.715 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.715 06:33:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:20.715 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:20.715 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:20.715 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:21.283 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:21.542 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:21.542 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:21.542 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.542 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:21.542 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:21.542 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:21.542 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
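The interleaved nvmf_subsystem_add_ns/nvmf_subsystem_remove_ns calls from here on come from eight add_remove workers running in parallel: the sh@62-sh@64 loop spawned add_remove 1 null0 through add_remove 8 null7 in the background, the sh@66 wait above reaps the eight PIDs listed on that line, and each worker's sh@16 loop runs ten add/remove cycles. A sketch of the worker and the spawn/reap logic as the trace suggests it (the nsid = i + 1 pairing is read off the add_remove arguments; the real script's internals may differ in detail):

    add_remove() {                       # one worker per (nsid, bdev) pair
        local nsid=$1 bdev=$2 i          # sh@14
        for ((i = 0; i < 10; i++)); do   # sh@16: ten hot-plug cycles
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }
    for ((i = 0; i < nthreads; i++)); do  # sh@62
        add_remove $((i + 1)) "null$i" &  # sh@63: run each worker in the background
        pids+=($!)                        # sh@64
    done
    wait "${pids[@]}"                     # sh@66

Because all eight workers target nqn.2016-06.io.spdk:cnode1 at once, this phase stresses concurrent namespace attach/detach on a single subsystem rather than resizing.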
00:29:21.800 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.800 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.800 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:21.800 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.800 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.800 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:21.800 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.801 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:22.059 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:22.059 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:22.059 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:22.059 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:22.059 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.059 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:22.059 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:22.059 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:22.318 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.318 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.318 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:22.318 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.318 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.318 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.319 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:22.578 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:22.578 06:33:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:22.578 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:22.578 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:22.578 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:22.578 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:22.578 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:22.578 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.836 06:33:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.836 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.837 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:22.837 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.837 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.837 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:22.837 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.837 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.837 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:23.095 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:23.095 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:23.095 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:23.095 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:23.095 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:23.095 
06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:23.095 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:23.354 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.612 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.612 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.612 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:23.612 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.612 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.613 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:23.871 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:23.871 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:23.871 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:23.871 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:23.871 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.871 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:23.871 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:23.871 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
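
The traces above and below are the core of the hotplug stress: a counted loop that attaches all eight null bdevs as namespaces of cnode1 in a varying NSID order, then detaches them again while host I/O keeps running, so asynchronous namespace-change events reach the initiator in an unpredictable sequence (the interleaved and occasionally doubled xtrace lines suggest the RPCs are issued concurrently). A sequential, simplified sketch of one pass, assuming rpc.py is reachable and the null0..null7 bdevs already exist; the shuf-based ordering is an illustrative stand-in for the script's own randomization:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  i=0
  while (( i < 10 )); do
      for n in $(shuf -e 1 2 3 4 5 6 7 8); do        # attach NSID n -> bdev null(n-1)
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))"
      done
      for n in $(shuf -e 1 2 3 4 5 6 7 8); do        # detach in a fresh random order
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
      done
      (( ++i ))
  done
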
-- # (( i < 10 )) 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.129 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.130 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:24.130 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.130 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.130 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.130 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.130 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:24.130 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:24.388 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:24.388 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:24.388 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:24.388 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:24.388 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.388 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:24.388 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:24.388 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.646 06:33:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.646 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:24.647 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.647 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.647 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:24.647 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:24.647 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:24.647 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:24.905 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:24.905 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:24.905 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:24.905 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:24.905 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:24.905 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.905 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:24.905 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:25.478 06:33:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:25.478 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:25.735 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:25.735 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:25.735 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:25.735 
06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:25.735 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:25.993 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:26.251 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:26.251 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:26.251 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:26.251 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:26.251 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.251 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:26.251 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:26.251 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.510 rmmod nvme_tcp 00:29:26.510 rmmod nvme_fabrics 00:29:26.510 rmmod nvme_keyring 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1185714 ']' 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1185714 00:29:26.510 06:33:16 
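
The nvmftestfini/nvmfcleanup trace above boils down to: sync, then repeatedly try to unload the kernel initiator module, since nvme-tcp can stay busy for a moment while connections drain; the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are the successful unloads. A condensed sketch, with names and the {1..20} retry bound taken from the trace:

  set +e                                     # unload attempts may fail while busy
  sync
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break       # retries until the module lets go
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  set -e
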
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1185714 ']' 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1185714 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1185714 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1185714' 00:29:26.510 killing process with pid 1185714 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1185714 00:29:26.510 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1185714 00:29:26.768 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:26.768 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:26.768 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:26.768 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:26.768 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:26.768 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:26.768 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:26.768 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.768 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.768 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.768 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.768 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.303 06:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.303 00:29:29.303 real 0m47.088s 00:29:29.303 user 3m17.401s 00:29:29.304 sys 0m21.696s 00:29:29.304 06:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.304 06:33:18 
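
Two helpers finish the teardown traced above. killprocess refuses to kill anything whose comm resolves to the sudo wrapper itself (here it sees reactor_1, the target's reactor thread name), then kills and reaps the app; iptr restores iptables while dropping only the rules the test tagged with an SPDK_NVMF comment. Condensed sketches, not the verbatim helpers (the uname/Linux branch is dropped):

  killprocess() {
      local pid=$1 name
      kill -0 "$pid" || return 1                     # gone already?
      name=$(ps --no-headers -o comm= "$pid")
      [[ $name != sudo ]] || return 1                # never kill our own sudo
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                            # reap if it was our child
  }

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr
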
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:29.304 ************************************ 00:29:29.304 END TEST nvmf_ns_hotplug_stress 00:29:29.304 ************************************ 00:29:29.304 06:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:29.304 06:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:29.304 06:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.304 06:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:29.304 ************************************ 00:29:29.304 START TEST nvmf_delete_subsystem 00:29:29.304 ************************************ 00:29:29.304 06:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:29.304 * Looking for test storage... 00:29:29.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:29.304 06:33:19 
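
The starred banners and the real/user/sys block come from run_test in autotest_common.sh, which brackets each sub-test and times it. Roughly, as a condensed sketch (the real helper also validates its argument count, per the '[' 4 -le 1 ']' check above, and manages xtrace):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                    # produces the real/user/sys summary above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }

Here the invocation is run_test nvmf_delete_subsystem .../delete_subsystem.sh --transport=tcp --interrupt-mode, so everything that follows runs against the TCP transport with the target in interrupt mode.
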
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:29.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.304 --rc genhtml_branch_coverage=1 00:29:29.304 --rc genhtml_function_coverage=1 00:29:29.304 --rc genhtml_legend=1 00:29:29.304 --rc geninfo_all_blocks=1 00:29:29.304 --rc geninfo_unexecuted_blocks=1 00:29:29.304 00:29:29.304 ' 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:29.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.304 --rc genhtml_branch_coverage=1 00:29:29.304 --rc genhtml_function_coverage=1 00:29:29.304 --rc genhtml_legend=1 00:29:29.304 --rc geninfo_all_blocks=1 00:29:29.304 --rc geninfo_unexecuted_blocks=1 00:29:29.304 00:29:29.304 ' 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:29.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.304 --rc genhtml_branch_coverage=1 00:29:29.304 --rc genhtml_function_coverage=1 00:29:29.304 --rc genhtml_legend=1 00:29:29.304 --rc geninfo_all_blocks=1 00:29:29.304 --rc 
geninfo_unexecuted_blocks=1 00:29:29.304 00:29:29.304 ' 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:29.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.304 --rc genhtml_branch_coverage=1 00:29:29.304 --rc genhtml_function_coverage=1 00:29:29.304 --rc genhtml_legend=1 00:29:29.304 --rc geninfo_all_blocks=1 00:29:29.304 --rc geninfo_unexecuted_blocks=1 00:29:29.304 00:29:29.304 ' 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
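
The lcov probe above (lt 1.15 2 via cmp_versions in scripts/common.sh) decides which generation of coverage flags to export: lcov 1.x wants the lcov_branch_coverage=1 / lcov_function_coverage=1 spellings seen in LCOV_OPTS. The comparison walks the dotted version strings field by field; a sketch of the logic, assuming numeric fields (the real helper also normalizes each field through decimal(), as the trace shows):

  lt() { cmp_versions "$1" '<' "$2"; }               # usage: lt 1.15 2
  cmp_versions() {
      local IFS=.- op=$2 v1 v2 i max
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$3"
      max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == *=* ]]                               # equal: only ==, <=, >= pass
  }

So 1.15 sorts before 2 because the first fields already differ (1 < 2); the 15 is never compared, which is exactly the case a plain string comparison would get wrong.
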
]] 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.304 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.305 06:33:19 
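
The enormous PATH above is expected: paths/export.sh prepends the go/protoc/golangci toolchain directories every time it is sourced, and it gets sourced again for each sub-test, so the same entries pile up. That is harmless, since lookup stops at the first match. For contrast only (not what the script does), a dedup-on-prepend variant:

  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;                   # already present, skip
          *) PATH=$1:$PATH ;;
      esac
  }
  path_prepend /opt/golangci/1.54.2/bin
  path_prepend /opt/protoc/21.7/bin
  path_prepend /opt/go/1.21.1/bin
  export PATH
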
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.305 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:31.323 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.323 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:31.323 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:31.323 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:31.323 06:33:21 
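
build_nvmf_app_args, traced above, is where interrupt mode actually lands on the target command line: the '[' 1 -eq 1 ']' check at nvmf/common.sh@33 passes because the suite was invoked with --interrupt-mode, so the flag is appended. In brief (the nvmf_tgt binary path here is illustrative):

  NVMF_APP=(./build/bin/nvmf_tgt)                  # illustrative binary path
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)      # shm id + verbose trace mask
  NVMF_APP+=(--interrupt-mode)                     # reactors sleep instead of busy-polling

That one flag is what distinguishes this whole nvmf_target_core_interrupt_mode pass from the default poll-mode run.
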
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:31.323 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:31.323 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:31.323 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:31.323 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:31.323 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:31.323 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:31.324 06:33:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:31.324 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:31.324 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.324 06:33:21 
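
The discovery above matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b); the scan that follows maps each PCI function to its kernel interface through sysfs. A condensed sketch of gather_supported_nvmf_pci_devs for just this device id:

  for pci in /sys/bus/pci/devices/*; do
      [[ $(< "$pci/vendor") == 0x8086 ]] || continue
      [[ $(< "$pci/device") == 0x159b ]] || continue
      echo "Found ${pci##*/} (0x8086 - 0x159b)"
      for net in "$pci"/net/*; do                 # kernel netdevs bound to this port
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done
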
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:31.324 Found net devices under 0000:84:00.0: cvl_0_0 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:31.324 Found net devices under 0000:84:00.1: cvl_0_1 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:31.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:29:31.324 00:29:31.324 --- 10.0.0.2 ping statistics --- 00:29:31.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.324 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:29:31.324 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:31.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:29:31.325 00:29:31.325 --- 10.0.0.1 ping statistics --- 00:29:31.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.325 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1193435 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1193435 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1193435 ']' 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
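The nvmf_tcp_init sequence traced above builds the standard two-port topology for phy runs: one port of the back-to-back pair stays in the root namespace as the initiator (10.0.0.1 on cvl_0_1) while the other is moved into a private network namespace for the target (10.0.0.2 on cvl_0_0), so both ends run real TCP stacks on one host. Condensed to its essentials:

    ip netns add cvl_0_0_ns_spdk                 # private stack for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                           # reachability check, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1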
00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.325 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:31.325 [2024-12-08 06:33:21.339739] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:31.325 [2024-12-08 06:33:21.340932] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:29:31.325 [2024-12-08 06:33:21.340996] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.325 [2024-12-08 06:33:21.417194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:31.584 [2024-12-08 06:33:21.476906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.584 [2024-12-08 06:33:21.476963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.584 [2024-12-08 06:33:21.477004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.584 [2024-12-08 06:33:21.477016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.584 [2024-12-08 06:33:21.477026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.584 [2024-12-08 06:33:21.478471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.584 [2024-12-08 06:33:21.478476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.584 [2024-12-08 06:33:21.565923] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:31.584 [2024-12-08 06:33:21.565931] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:31.584 [2024-12-08 06:33:21.566190] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
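nvmfappstart then launches the target inside that namespace; the --interrupt-mode flag is what produces the reactor and spdk_thread notices above, with reactors sleeping on file descriptors instead of busy-polling. Roughly, with paths shortened:

    # Start nvmf_tgt on cores 0-1 (-m 0x3) in interrupt mode, inside the
    # target namespace, then wait for its RPC socket to come up.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # test-suite helper: polls /var/tmp/spdk.sock until RPC answers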
00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:31.584 [2024-12-08 06:33:21.615081] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.584 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:31.585 [2024-12-08 06:33:21.635299] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:31.585 NULL1 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.585 06:33:21 
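rpc_cmd is the suite's wrapper around scripts/rpc.py talking to that socket; the same setup done by hand would look roughly like this (flags copied from the trace):

    # Transport, subsystem, listener, and a 1000 MiB null backing bdev.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10        # -a: allow any host
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512   # size in MiB, block size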
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:31.585 Delay0 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1193574 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:31.585 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:31.844 [2024-12-08 06:33:21.710965] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
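The point of the Delay0 bdev is timing: with roughly one-second artificial latencies (the bdev_delay_create arguments are in microseconds), the 5-second perf run is guaranteed to still have I/O queued when the subsystem is deleted two seconds in, which is exactly the error storm that follows. A condensed sketch of this phase:

    # Wrap NULL1 in a ~1 s delay bdev, expose it, and start perf in the
    # background; the test then deletes the subsystem mid-run.
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in usec
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &     # qd 128, 70% reads, 512 B I/O
    perf_pid=$!
    sleep 2
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1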
00:29:33.750 06:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:33.750 06:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.750 06:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:29:33.750 through 00:29:34.974 [long runs of 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' with interleaved 'starting I/O failed: -6' markers, condensed: every queued and newly submitted I/O fails once the subsystem is deleted]
00:29:33.750 [2024-12-08 06:33:23.843893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11094a0 is same with the state(6) to be set
00:29:33.751 [2024-12-08 06:33:23.844587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5b94000c40 is same with the state(6) to be set
00:29:34.716 [2024-12-08 06:33:24.807418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110a9b0 is same with the state(6) to be set
00:29:34.974 [2024-12-08 06:33:24.846833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5b9400d020 is same with the state(6) to be set
00:29:34.974 [2024-12-08 06:33:24.847940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5b9400d7e0 is same with the state(6) to be set
00:29:34.974 [2024-12-08 06:33:24.848547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1109680 is same with the state(6) to be set
00:29:34.974 [2024-12-08 06:33:24.849102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11092c0 is same with the state(6) to be set
00:29:34.975 Initializing NVMe Controllers 00:29:34.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.975 Controller IO queue size 128, less than required. 00:29:34.975 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:34.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:34.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:34.975 Initialization complete. Launching workers.
00:29:34.975 ======================================================== 00:29:34.975 Latency(us) 00:29:34.975 Device Information : IOPS MiB/s Average min max 00:29:34.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.37 0.08 913695.29 688.80 1011969.86 00:29:34.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.42 0.08 992854.12 369.16 2002336.09 00:29:34.975 ======================================================== 00:29:34.975 Total : 317.80 0.16 952408.91 369.16 2002336.09 00:29:34.975 00:29:34.975 [2024-12-08 06:33:24.849590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110a9b0 (9): Bad file descriptor 00:29:34.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:34.975 06:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.975 06:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:34.975 06:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1193574 00:29:34.975 06:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1193574 00:29:35.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1193574) - No such process 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1193574 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1193574 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1193574 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:35.539 [2024-12-08 06:33:25.371312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1193973 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:35.539 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1193973 00:29:35.540 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:35.540 [2024-12-08 06:33:25.432276] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
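The delay/kill -0 loops traced around here implement the test's wait pattern twice: poll the perf process until it exits (or a half-second-per-iteration budget runs out), then reap it. After the first run the NOT helper asserts the reap reports failure, since perf lost its I/O mid-flight; the second run just started above is reaped with a plain wait once its 3-second workload completes. As a sketch of the first form:

    # Poll until perf exits; give up after ~30 iterations of 0.5 s each.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1   # budget exceeded: fail the test
        sleep 0.5
    done
    NOT wait "$perf_pid"   # NOT: autotest helper that succeeds only if the
                           # command fails; perf must exit nonzero here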
00:29:35.797 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:35.797 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1193973 00:29:35.797 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:36.366 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:36.366 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1193973 00:29:36.366 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:36.934 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:36.934 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1193973 00:29:36.934 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:37.502 06:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:37.502 06:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1193973 00:29:37.502 06:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:38.072 06:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:38.072 06:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1193973 00:29:38.072 06:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:38.331 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:38.331 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1193973 00:29:38.331 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:38.590 Initializing NVMe Controllers 00:29:38.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.590 Controller IO queue size 128, less than required. 00:29:38.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:38.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:38.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:38.590 Initialization complete. Launching workers. 
00:29:38.590 ======================================================== 00:29:38.590 Latency(us) 00:29:38.590 Device Information : IOPS MiB/s Average min max 00:29:38.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003436.70 1000188.25 1010834.72 00:29:38.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006017.80 1000259.83 1042887.57 00:29:38.590 ======================================================== 00:29:38.590 Total : 256.00 0.12 1004727.25 1000188.25 1042887.57 00:29:38.590 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1193973 00:29:38.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1193973) - No such process 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1193973 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:38.849 rmmod nvme_tcp 00:29:38.849 rmmod nvme_fabrics 00:29:38.849 rmmod nvme_keyring 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1193435 ']' 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1193435 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1193435 ']' 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1193435 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.849 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1193435 00:29:39.108 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:39.108 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:39.108 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1193435' 00:29:39.108 killing process with pid 1193435 00:29:39.108 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1193435 00:29:39.108 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1193435 00:29:39.108 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:39.108 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:39.108 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:39.108 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:39.108 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:39.108 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:39.108 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:39.108 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:39.108 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:39.108 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.108 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.108 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:41.650 00:29:41.650 real 0m12.282s 00:29:41.650 user 0m24.749s 00:29:41.650 sys 0m3.618s 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:41.650 ************************************ 00:29:41.650 END TEST nvmf_delete_subsystem 00:29:41.650 ************************************ 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:41.650 ************************************ 00:29:41.650 START TEST nvmf_host_management 00:29:41.650 ************************************ 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:41.650 * Looking for test storage... 00:29:41.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.650 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:41.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.651 --rc genhtml_branch_coverage=1 00:29:41.651 --rc genhtml_function_coverage=1 00:29:41.651 --rc genhtml_legend=1 00:29:41.651 --rc geninfo_all_blocks=1 00:29:41.651 --rc geninfo_unexecuted_blocks=1 00:29:41.651 00:29:41.651 ' 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:41.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.651 --rc genhtml_branch_coverage=1 00:29:41.651 --rc genhtml_function_coverage=1 00:29:41.651 --rc genhtml_legend=1 00:29:41.651 --rc geninfo_all_blocks=1 00:29:41.651 --rc geninfo_unexecuted_blocks=1 00:29:41.651 00:29:41.651 ' 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:41.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.651 --rc genhtml_branch_coverage=1 00:29:41.651 --rc genhtml_function_coverage=1 00:29:41.651 --rc genhtml_legend=1 00:29:41.651 --rc geninfo_all_blocks=1 00:29:41.651 --rc geninfo_unexecuted_blocks=1 00:29:41.651 00:29:41.651 ' 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:41.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.651 --rc genhtml_branch_coverage=1 00:29:41.651 --rc genhtml_function_coverage=1 00:29:41.651 --rc genhtml_legend=1 
00:29:41.651 --rc geninfo_all_blocks=1 00:29:41.651 --rc geninfo_unexecuted_blocks=1 00:29:41.651 00:29:41.651 ' 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.651 06:33:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:41.651 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:41.652 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:41.652 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.652 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:41.652 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:41.652 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:41.652 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.652 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.652 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.652 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:41.652 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:41.652 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:41.652 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:43.555 06:33:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:43.555 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:43.555 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.555 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:43.556 Found net devices under 0000:84:00.0: cvl_0_0 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:43.556 Found net devices under 0000:84:00.1: cvl_0_1 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:43.556 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:43.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:29:43.815 00:29:43.815 --- 10.0.0.2 ping statistics --- 00:29:43.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.815 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:43.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:29:43.815 00:29:43.815 --- 10.0.0.1 ping statistics --- 00:29:43.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.815 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1196348 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1196348 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1196348 ']' 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.815 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:43.816 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:43.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.816 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.816 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:43.816 [2024-12-08 06:33:33.785391] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:43.816 [2024-12-08 06:33:33.786474] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:29:43.816 [2024-12-08 06:33:33.786543] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.816 [2024-12-08 06:33:33.859636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:43.816 [2024-12-08 06:33:33.922745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.816 [2024-12-08 06:33:33.922811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.816 [2024-12-08 06:33:33.922827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.816 [2024-12-08 06:33:33.922841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.816 [2024-12-08 06:33:33.922854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:43.816 [2024-12-08 06:33:33.924584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.816 [2024-12-08 06:33:33.924643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:43.816 [2024-12-08 06:33:33.924716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:43.816 [2024-12-08 06:33:33.924719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.075 [2024-12-08 06:33:34.015959] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:44.075 [2024-12-08 06:33:34.016179] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:44.075 [2024-12-08 06:33:34.016477] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:44.075 [2024-12-08 06:33:34.017169] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:44.075 [2024-12-08 06:33:34.017374] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.075 [2024-12-08 06:33:34.073475] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:44.075 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.076 Malloc0 00:29:44.076 [2024-12-08 06:33:34.145684] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1196510 00:29:44.076 06:33:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1196510 /var/tmp/bdevperf.sock 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1196510 ']' 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:44.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:44.076 { 00:29:44.076 "params": { 00:29:44.076 "name": "Nvme$subsystem", 00:29:44.076 "trtype": "$TEST_TRANSPORT", 00:29:44.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.076 "adrfam": "ipv4", 00:29:44.076 "trsvcid": "$NVMF_PORT", 00:29:44.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.076 "hdgst": ${hdgst:-false}, 00:29:44.076 "ddgst": ${ddgst:-false} 00:29:44.076 }, 00:29:44.076 "method": "bdev_nvme_attach_controller" 00:29:44.076 } 00:29:44.076 EOF 00:29:44.076 )") 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:44.076 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:44.076 "params": { 00:29:44.076 "name": "Nvme0", 00:29:44.076 "trtype": "tcp", 00:29:44.076 "traddr": "10.0.0.2", 00:29:44.076 "adrfam": "ipv4", 00:29:44.076 "trsvcid": "4420", 00:29:44.076 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:44.076 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:44.076 "hdgst": false, 00:29:44.076 "ddgst": false 00:29:44.076 }, 00:29:44.076 "method": "bdev_nvme_attach_controller" 00:29:44.076 }' 00:29:44.334 [2024-12-08 06:33:34.230898] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:29:44.335 [2024-12-08 06:33:34.230977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196510 ] 00:29:44.335 [2024-12-08 06:33:34.301668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.335 [2024-12-08 06:33:34.361587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.593 Running I/O for 10 seconds... 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:29:44.593 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.853 [2024-12-08 06:33:34.957485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957568] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the 
state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.957922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d40c0 is same with the state(6) to be set 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.853 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:44.853 [2024-12-08 06:33:34.963698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.853 [2024-12-08 06:33:34.963761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.853 [2024-12-08 06:33:34.963781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.853 [2024-12-08 06:33:34.963796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.853 [2024-12-08 06:33:34.963811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.853 [2024-12-08 06:33:34.963825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.853 [2024-12-08 06:33:34.963839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.853 [2024-12-08 06:33:34.963852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.853 [2024-12-08 06:33:34.963866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b53c60 is same with the state(6) to be set 00:29:44.853 [2024-12-08 06:33:34.964258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.853 [2024-12-08 06:33:34.964296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.853 [2024-12-08 06:33:34.964322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.853 [2024-12-08 06:33:34.964338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:44.853 [2024-12-08 06:33:34.964354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.853 [2024-12-08 06:33:34.964368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:44.855 [... 61 further WRITE/completion pairs elided: sqid:1 cid:3 through cid:63, lba 74112 through 81792 in steps of 128, every command completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:29:44.855 [2024-12-08 06:33:34.967575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:44.855 task offset: 73728 on job bdev=Nvme0n1 fails
00:29:44.855
00:29:44.855 Latency(us)
00:29:44.855 [2024-12-08T05:33:34.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.855 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:44.855 Job: Nvme0n1 ended in about 0.38 seconds with error
00:29:44.855 Verification LBA range: start 0x0 length 0x400
00:29:44.855 Nvme0n1 : 0.38 1496.68 93.54 166.30 0.00 37365.40 3106.89 33787.45
00:29:44.855 [2024-12-08T05:33:34.974Z] ===================================================================================================================
00:29:44.855 [2024-12-08T05:33:34.974Z] Total : 1496.68 93.54 166.30 0.00 37365.40 3106.89 33787.45
00:29:44.855 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.855 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:29:44.855 [2024-12-08 06:33:34.970413] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:44.855 [2024-12-08 06:33:34.970459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b53c60 (9): Bad file descriptor
00:29:45.113 [2024-12-08 06:33:34.974807] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
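Each completion in the burst above ends with the status pair that spdk_nvme_print_completion renders as (SCT/SC): 00/08 is Status Code Type 0h (Generic Command Status) with Status Code 08h, Command Aborted due to SQ Deletion, i.e. the expected fate of WRITEs still queued when the qpair is torn down mid-reset. A minimal shell sketch of that decoding (the helper and its tiny lookup are illustrative only, not part of the SPDK tree):

decode_nvme_status() {
    # $1 = SCT hex field, $2 = SC hex field, as printed in the "(SCT/SC)" pair above
    local sct=$((16#$1)) sc=$((16#$2))
    if (( sct == 0 && sc == 0x08 )); then
        echo 'Generic Command Status: Command Aborted due to SQ Deletion'
    elif (( sct == 0 && sc == 0x00 )); then
        echo 'Generic Command Status: Successful Completion'
    else
        echo "SCT=$sct SC=$sc: see the NVMe base spec status code tables"
    fi
}
decode_nvme_status 00 08   # -> Generic Command Status: Command Aborted due to SQ Deletion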
00:29:46.052 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1196510 00:29:46.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1196510) - No such process 00:29:46.052 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:46.052 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:46.052 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:46.052 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:46.052 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:46.052 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:46.052 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:46.052 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:46.052 { 00:29:46.052 "params": { 00:29:46.052 "name": "Nvme$subsystem", 00:29:46.052 "trtype": "$TEST_TRANSPORT", 00:29:46.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:46.052 "adrfam": "ipv4", 00:29:46.052 "trsvcid": "$NVMF_PORT", 00:29:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:46.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:46.052 "hdgst": ${hdgst:-false}, 00:29:46.052 "ddgst": ${ddgst:-false} 00:29:46.052 }, 00:29:46.052 "method": "bdev_nvme_attach_controller" 00:29:46.052 } 00:29:46.052 EOF 00:29:46.052 )") 00:29:46.052 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:46.052 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:46.052 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:46.052 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:46.052 "params": { 00:29:46.052 "name": "Nvme0", 00:29:46.052 "trtype": "tcp", 00:29:46.052 "traddr": "10.0.0.2", 00:29:46.052 "adrfam": "ipv4", 00:29:46.052 "trsvcid": "4420", 00:29:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:46.052 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:46.052 "hdgst": false, 00:29:46.052 "ddgst": false 00:29:46.052 }, 00:29:46.052 "method": "bdev_nvme_attach_controller" 00:29:46.052 }' 00:29:46.052 [2024-12-08 06:33:36.022369] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
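The heredoc above assembles the bdev configuration that bdevperf reads over /dev/fd/62. A standalone sketch of the same run, assuming only the standard SPDK JSON-config wrapper (a subsystems array carrying the bdev_nvme_attach_controller object printed above) and a scratch file under /tmp:

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload parameters as the run above: queue depth 64, 64 KiB I/O, verify, 1 second
./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1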
00:29:46.052 [2024-12-08 06:33:36.022450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196668 ] 00:29:46.052 [2024-12-08 06:33:36.092272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.052 [2024-12-08 06:33:36.151131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.619 Running I/O for 1 seconds... 00:29:47.555 1600.00 IOPS, 100.00 MiB/s 00:29:47.555 Latency(us) 00:29:47.555 [2024-12-08T05:33:37.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.555 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:47.555 Verification LBA range: start 0x0 length 0x400 00:29:47.555 Nvme0n1 : 1.03 1620.56 101.28 0.00 0.00 38862.56 5534.15 34175.81 00:29:47.555 [2024-12-08T05:33:37.674Z] =================================================================================================================== 00:29:47.555 [2024-12-08T05:33:37.674Z] Total : 1620.56 101.28 0.00 0.00 38862.56 5534.15 34175.81 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:47.816 rmmod nvme_tcp 00:29:47.816 rmmod nvme_fabrics 00:29:47.816 rmmod nvme_keyring 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1196348 ']' 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1196348 00:29:47.816 06:33:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1196348 ']' 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1196348 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196348 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196348' 00:29:47.816 killing process with pid 1196348 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1196348 00:29:47.816 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1196348 00:29:48.076 [2024-12-08 06:33:38.085843] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:48.076 06:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.076 06:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.076 06:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.076 06:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:48.076 06:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:48.076 06:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:48.076 06:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.076 06:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.076 06:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.076 06:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.076 06:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.076 06:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:50.608 00:29:50.608 real 0m8.874s 00:29:50.608 user 
0m17.736s 00:29:50.608 sys 0m3.793s 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:50.608 ************************************ 00:29:50.608 END TEST nvmf_host_management 00:29:50.608 ************************************ 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:50.608 ************************************ 00:29:50.608 START TEST nvmf_lvol 00:29:50.608 ************************************ 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:50.608 * Looking for test storage... 00:29:50.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
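The run_test wrapper seen above (from autotest_common.sh) labels, times, and xtraces the script it runs, so this stage can be reproduced by hand from the same checkout with the same flags:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode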
00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:50.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.608 --rc genhtml_branch_coverage=1 00:29:50.608 --rc genhtml_function_coverage=1 00:29:50.608 --rc genhtml_legend=1 00:29:50.608 --rc geninfo_all_blocks=1 00:29:50.608 --rc geninfo_unexecuted_blocks=1 00:29:50.608 00:29:50.608 ' 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:50.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.608 --rc genhtml_branch_coverage=1 00:29:50.608 --rc genhtml_function_coverage=1 00:29:50.608 --rc genhtml_legend=1 00:29:50.608 --rc geninfo_all_blocks=1 00:29:50.608 --rc geninfo_unexecuted_blocks=1 00:29:50.608 00:29:50.608 ' 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:50.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.608 --rc genhtml_branch_coverage=1 00:29:50.608 --rc genhtml_function_coverage=1 00:29:50.608 --rc genhtml_legend=1 00:29:50.608 --rc geninfo_all_blocks=1 00:29:50.608 --rc geninfo_unexecuted_blocks=1 00:29:50.608 00:29:50.608 ' 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:50.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.608 --rc genhtml_branch_coverage=1 00:29:50.608 --rc genhtml_function_coverage=1 
00:29:50.608 --rc genhtml_legend=1 00:29:50.608 --rc geninfo_all_blocks=1 00:29:50.608 --rc geninfo_unexecuted_blocks=1 00:29:50.608 00:29:50.608 ' 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.608 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.609 06:33:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.609 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:52.513 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.514 06:33:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:52.514 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:52.514 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:52.514 Found net devices under 0000:84:00.0: cvl_0_0 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:52.514 Found net devices under 0000:84:00.1: cvl_0_1 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.514 
06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:52.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:29:52.514 00:29:52.514 --- 10.0.0.2 ping statistics --- 00:29:52.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.514 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:52.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:29:52.514 00:29:52.514 --- 10.0.0.1 ping statistics --- 00:29:52.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.514 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1198884 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1198884 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1198884 ']' 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.514 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.515 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:52.515 [2024-12-08 06:33:42.581498] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
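Stripped of the xtrace prefixes, the target-namespace plumbing performed above reduces to the sequence below. The cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this e810 test bed, and the log's iptables call additionally tags the rule with an SPDK_NVMF comment:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP (port 4420) on the initiator-side interface
ping -c 1 10.0.0.2                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root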
00:29:52.515 [2024-12-08 06:33:42.582569] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:29:52.515 [2024-12-08 06:33:42.582622] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.772 [2024-12-08 06:33:42.655378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:52.772 [2024-12-08 06:33:42.712672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.772 [2024-12-08 06:33:42.712730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.772 [2024-12-08 06:33:42.712754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.772 [2024-12-08 06:33:42.712765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.772 [2024-12-08 06:33:42.712775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.772 [2024-12-08 06:33:42.714388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.772 [2024-12-08 06:33:42.714460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:52.772 [2024-12-08 06:33:42.714464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.772 [2024-12-08 06:33:42.802944] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:52.772 [2024-12-08 06:33:42.803141] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:52.772 [2024-12-08 06:33:42.803195] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:52.772 [2024-12-08 06:33:42.803418] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
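For readers following the trace: with the interrupt-mode target now up inside cvl_0_0_ns_spdk, the nvmf_lvol test builds its whole stack over JSON-RPC. A condensed sketch of the sequence the records below execute (here rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and the UUIDs are simply the ones this particular run reports):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                    # TCP transport
  rpc.py bdev_malloc_create 64 512                                  # -> Malloc0
  rpc.py bdev_malloc_create 64 512                                  # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # RAID0 over both malloc bdevs
  rpc.py bdev_lvol_create_lvstore raid0 lvs                         # -> 4c869f8e-e38d-4dda-87c8-fa6723f3181c
  rpc.py bdev_lvol_create -u 4c869f8e-e38d-4dda-87c8-fa6723f3181c lvol 20
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 98bd333d-0dd4-4c87-acf2-d46b7a7a7278
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

While spdk_nvme_perf runs a 10-second randwrite workload against that namespace, the test exercises bdev_lvol_snapshot, bdev_lvol_resize (20 -> 30), bdev_lvol_clone and bdev_lvol_inflate under live I/O, which is what the records below show.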
00:29:52.772 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.772 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:52.772 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:52.772 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:52.772 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:52.772 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.772 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:53.030 [2024-12-08 06:33:43.107225] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.030 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:53.599 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:53.599 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:53.859 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:53.859 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:54.117 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:54.376 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4c869f8e-e38d-4dda-87c8-fa6723f3181c 00:29:54.376 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4c869f8e-e38d-4dda-87c8-fa6723f3181c lvol 20 00:29:54.633 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=98bd333d-0dd4-4c87-acf2-d46b7a7a7278 00:29:54.633 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:54.891 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 98bd333d-0dd4-4c87-acf2-d46b7a7a7278 00:29:55.147 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:55.404 [2024-12-08 06:33:45.343336] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:29:55.404 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:55.661 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1199305 00:29:55.661 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:55.661 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:56.594 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 98bd333d-0dd4-4c87-acf2-d46b7a7a7278 MY_SNAPSHOT 00:29:57.164 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2d322d77-36d3-48f4-9209-871ecddf7924 00:29:57.164 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 98bd333d-0dd4-4c87-acf2-d46b7a7a7278 30 00:29:57.423 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2d322d77-36d3-48f4-9209-871ecddf7924 MY_CLONE 00:29:57.681 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1e63fad4-e451-474b-9a3d-4d7bca6d6f27 00:29:57.681 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1e63fad4-e451-474b-9a3d-4d7bca6d6f27 00:29:58.248 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1199305 00:30:06.367 Initializing NVMe Controllers 00:30:06.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:06.367 Controller IO queue size 128, less than required. 00:30:06.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:06.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:06.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:06.367 Initialization complete. Launching workers. 
00:30:06.367 ========================================================
00:30:06.367                                                                                Latency(us)
00:30:06.367 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:30:06.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   10555.60      41.23   12131.83     392.04   53806.04
00:30:06.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   10366.90      40.50   12352.59    3815.90   61920.04
00:30:06.367 ========================================================
00:30:06.367 Total                                                                    :   20922.50      81.73   12241.22     392.04   61920.04
00:30:06.367
00:30:06.367 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:06.626 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 98bd333d-0dd4-4c87-acf2-d46b7a7a7278 00:30:06.884 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4c869f8e-e38d-4dda-87c8-fa6723f3181c 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:06.885 rmmod nvme_tcp 00:30:06.885 rmmod nvme_fabrics 00:30:06.885 rmmod nvme_keyring 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1198884 ']' 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1198884 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1198884 ']' 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1198884 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1198884 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1198884' 00:30:06.885 killing process with pid 1198884 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1198884 00:30:06.885 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1198884 00:30:07.144 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:07.144 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:07.144 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:07.144 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:07.144 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:07.144 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:07.144 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:07.144 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:07.144 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:07.144 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.144 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.144 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.116 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:09.116 00:30:09.116 real 0m19.002s 00:30:09.116 user 0m56.077s 00:30:09.116 sys 0m8.005s 00:30:09.116 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:09.116 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:09.116 ************************************ 00:30:09.116 END TEST nvmf_lvol 00:30:09.116 ************************************ 00:30:09.116 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:09.116 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:09.116 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.116 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:09.376 ************************************ 00:30:09.376 START TEST nvmf_lvs_grow 00:30:09.376 
************************************ 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:09.376 * Looking for test storage... 00:30:09.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:09.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.376 --rc genhtml_branch_coverage=1 00:30:09.376 --rc genhtml_function_coverage=1 00:30:09.376 --rc genhtml_legend=1 00:30:09.376 --rc geninfo_all_blocks=1 00:30:09.376 --rc geninfo_unexecuted_blocks=1 00:30:09.376 00:30:09.376 ' 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:09.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.376 --rc genhtml_branch_coverage=1 00:30:09.376 --rc genhtml_function_coverage=1 00:30:09.376 --rc genhtml_legend=1 00:30:09.376 --rc geninfo_all_blocks=1 00:30:09.376 --rc geninfo_unexecuted_blocks=1 00:30:09.376 00:30:09.376 ' 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:09.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.376 --rc genhtml_branch_coverage=1 00:30:09.376 --rc genhtml_function_coverage=1 00:30:09.376 --rc genhtml_legend=1 00:30:09.376 --rc geninfo_all_blocks=1 00:30:09.376 --rc geninfo_unexecuted_blocks=1 00:30:09.376 00:30:09.376 ' 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:09.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.376 --rc genhtml_branch_coverage=1 00:30:09.376 --rc genhtml_function_coverage=1 00:30:09.376 --rc genhtml_legend=1 00:30:09.376 --rc geninfo_all_blocks=1 00:30:09.376 --rc geninfo_unexecuted_blocks=1 00:30:09.376 00:30:09.376 ' 00:30:09.376 06:33:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.376 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.377 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:11.903 06:34:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.903 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:11.904 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:11.904 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:11.904 Found net devices under 0000:84:00.0: cvl_0_0 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:11.904 Found net devices under 0000:84:00.1: cvl_0_1 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:11.904 06:34:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:11.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:30:11.904 00:30:11.904 --- 10.0.0.2 ping statistics --- 00:30:11.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.904 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:11.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:30:11.904 00:30:11.904 --- 10.0.0.1 ping statistics --- 00:30:11.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.904 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1202578 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1202578 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1202578 ']' 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.904 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.905 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.905 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:11.905 [2024-12-08 06:34:01.824485] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
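At this point nvmftestinit has torn down and rebuilt the same dual-port topology used by nvmf_lvol above, and a fresh single-core target (-m 0x1) is starting in the namespace. For reference, the recurring per-suite network bring-up, collected from the xtrace records above (interface names and addresses exactly as this rig reports them; the iptables rule additionally carries an SPDK_NVMF comment so the iptr helper can strip it again at teardown):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the host netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # host -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> host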
00:30:11.905 [2024-12-08 06:34:01.825556] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:30:11.905 [2024-12-08 06:34:01.825624] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.905 [2024-12-08 06:34:01.895531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.905 [2024-12-08 06:34:01.951298] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.905 [2024-12-08 06:34:01.951362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.905 [2024-12-08 06:34:01.951383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.905 [2024-12-08 06:34:01.951394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.905 [2024-12-08 06:34:01.951403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.905 [2024-12-08 06:34:01.952065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.162 [2024-12-08 06:34:02.036996] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:12.162 [2024-12-08 06:34:02.037298] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:12.162 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.162 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:12.162 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:12.162 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:12.162 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:12.162 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.162 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:12.420 [2024-12-08 06:34:02.344688] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:12.420 ************************************ 00:30:12.420 START TEST lvs_grow_clean 00:30:12.420 ************************************ 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:12.420 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:12.678 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:12.678 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:12.937 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4cc0cf40-09a5-4dfb-8533-0d975a1a9224 00:30:12.938 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc0cf40-09a5-4dfb-8533-0d975a1a9224 00:30:12.938 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:13.198 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:13.198 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:13.198 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4cc0cf40-09a5-4dfb-8533-0d975a1a9224 lvol 150 00:30:13.458 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f8425c0f-3157-485a-b525-77d22f6b7134 00:30:13.458 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:13.458 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:13.716 [2024-12-08 06:34:03.780599] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:13.716 [2024-12-08 06:34:03.780737] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:13.716 true 00:30:13.716 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc0cf40-09a5-4dfb-8533-0d975a1a9224 00:30:13.716 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:13.976 06:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:13.976 06:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:14.546 06:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f8425c0f-3157-485a-b525-77d22f6b7134 00:30:14.546 06:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:14.804 [2024-12-08 06:34:04.900996] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.804 06:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:15.374 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1203020 00:30:15.374 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:15.374 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:15.374 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1203020 /var/tmp/bdevperf.sock 00:30:15.374 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1203020 ']' 00:30:15.374 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:15.374 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.374 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:15.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:15.374 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.374 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:15.374 [2024-12-08 06:34:05.230538] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:30:15.374 [2024-12-08 06:34:05.230628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203020 ] 00:30:15.374 [2024-12-08 06:34:05.298105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.374 [2024-12-08 06:34:05.355918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.374 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.374 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:15.374 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:15.943 Nvme0n1 00:30:15.943 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:16.202 [ 00:30:16.202 { 00:30:16.202 "name": "Nvme0n1", 00:30:16.202 "aliases": [ 00:30:16.202 "f8425c0f-3157-485a-b525-77d22f6b7134" 00:30:16.202 ], 00:30:16.202 "product_name": "NVMe disk", 00:30:16.202 "block_size": 4096, 00:30:16.202 "num_blocks": 38912, 00:30:16.202 "uuid": "f8425c0f-3157-485a-b525-77d22f6b7134", 00:30:16.202 "numa_id": 1, 00:30:16.202 "assigned_rate_limits": { 00:30:16.202 "rw_ios_per_sec": 0, 00:30:16.202 "rw_mbytes_per_sec": 0, 00:30:16.202 "r_mbytes_per_sec": 0, 00:30:16.202 "w_mbytes_per_sec": 0 00:30:16.202 }, 00:30:16.202 "claimed": false, 00:30:16.202 "zoned": false, 00:30:16.202 "supported_io_types": { 00:30:16.202 "read": true, 00:30:16.202 "write": true, 00:30:16.202 "unmap": true, 00:30:16.202 "flush": true, 00:30:16.202 "reset": true, 00:30:16.202 "nvme_admin": true, 00:30:16.202 "nvme_io": true, 00:30:16.202 "nvme_io_md": false, 00:30:16.202 "write_zeroes": true, 00:30:16.202 "zcopy": false, 00:30:16.202 "get_zone_info": false, 00:30:16.202 "zone_management": false, 00:30:16.202 "zone_append": false, 00:30:16.202 "compare": true, 00:30:16.202 "compare_and_write": true, 00:30:16.202 "abort": true, 00:30:16.202 "seek_hole": false, 00:30:16.202 "seek_data": false, 00:30:16.202 "copy": true, 
00:30:16.202 "nvme_iov_md": false 00:30:16.202 }, 00:30:16.202 "memory_domains": [ 00:30:16.202 { 00:30:16.202 "dma_device_id": "system", 00:30:16.202 "dma_device_type": 1 00:30:16.202 } 00:30:16.202 ], 00:30:16.202 "driver_specific": { 00:30:16.202 "nvme": [ 00:30:16.202 { 00:30:16.202 "trid": { 00:30:16.202 "trtype": "TCP", 00:30:16.202 "adrfam": "IPv4", 00:30:16.202 "traddr": "10.0.0.2", 00:30:16.202 "trsvcid": "4420", 00:30:16.202 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:16.202 }, 00:30:16.202 "ctrlr_data": { 00:30:16.202 "cntlid": 1, 00:30:16.202 "vendor_id": "0x8086", 00:30:16.202 "model_number": "SPDK bdev Controller", 00:30:16.202 "serial_number": "SPDK0", 00:30:16.202 "firmware_revision": "25.01", 00:30:16.202 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:16.202 "oacs": { 00:30:16.202 "security": 0, 00:30:16.202 "format": 0, 00:30:16.202 "firmware": 0, 00:30:16.202 "ns_manage": 0 00:30:16.202 }, 00:30:16.202 "multi_ctrlr": true, 00:30:16.202 "ana_reporting": false 00:30:16.202 }, 00:30:16.202 "vs": { 00:30:16.202 "nvme_version": "1.3" 00:30:16.203 }, 00:30:16.203 "ns_data": { 00:30:16.203 "id": 1, 00:30:16.203 "can_share": true 00:30:16.203 } 00:30:16.203 } 00:30:16.203 ], 00:30:16.203 "mp_policy": "active_passive" 00:30:16.203 } 00:30:16.203 } 00:30:16.203 ] 00:30:16.203 06:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1203154 00:30:16.203 06:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:16.203 06:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:16.203 Running I/O for 10 seconds... 
00:30:17.138 Latency(us) 00:30:17.138 [2024-12-08T05:34:07.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:17.138 Nvme0n1 : 1.00 16510.00 64.49 0.00 0.00 0.00 0.00 0.00 00:30:17.138 [2024-12-08T05:34:07.257Z] =================================================================================================================== 00:30:17.138 [2024-12-08T05:34:07.257Z] Total : 16510.00 64.49 0.00 0.00 0.00 0.00 0.00 00:30:17.138 00:30:18.076 06:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4cc0cf40-09a5-4dfb-8533-0d975a1a9224 00:30:18.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.076 Nvme0n1 : 2.00 16590.50 64.81 0.00 0.00 0.00 0.00 0.00 00:30:18.076 [2024-12-08T05:34:08.195Z] =================================================================================================================== 00:30:18.076 [2024-12-08T05:34:08.195Z] Total : 16590.50 64.81 0.00 0.00 0.00 0.00 0.00 00:30:18.076 00:30:18.334 true 00:30:18.334 06:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc0cf40-09a5-4dfb-8533-0d975a1a9224 00:30:18.334 06:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:18.592 06:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:18.592 06:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:18.592 06:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1203154 00:30:19.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.161 Nvme0n1 : 3.00 16563.67 64.70 0.00 0.00 0.00 0.00 0.00 00:30:19.161 [2024-12-08T05:34:09.280Z] =================================================================================================================== 00:30:19.161 [2024-12-08T05:34:09.280Z] Total : 16563.67 64.70 0.00 0.00 0.00 0.00 0.00 00:30:19.161 00:30:20.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.097 Nvme0n1 : 4.00 16645.50 65.02 0.00 0.00 0.00 0.00 0.00 00:30:20.097 [2024-12-08T05:34:10.216Z] =================================================================================================================== 00:30:20.097 [2024-12-08T05:34:10.216Z] Total : 16645.50 65.02 0.00 0.00 0.00 0.00 0.00 00:30:20.097 00:30:21.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.472 Nvme0n1 : 5.00 16694.60 65.21 0.00 0.00 0.00 0.00 0.00 00:30:21.472 [2024-12-08T05:34:11.591Z] =================================================================================================================== 00:30:21.472 [2024-12-08T05:34:11.591Z] Total : 16694.60 65.21 0.00 0.00 0.00 0.00 0.00 00:30:21.473 00:30:22.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.407 Nvme0n1 : 6.00 16780.33 65.55 0.00 0.00 0.00 0.00 0.00 00:30:22.407 [2024-12-08T05:34:12.526Z] 
=================================================================================================================== 00:30:22.407 [2024-12-08T05:34:12.526Z] Total : 16780.33 65.55 0.00 0.00 0.00 0.00 0.00 00:30:22.407 00:30:23.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:23.342 Nvme0n1 : 7.00 16841.43 65.79 0.00 0.00 0.00 0.00 0.00 00:30:23.342 [2024-12-08T05:34:13.461Z] =================================================================================================================== 00:30:23.342 [2024-12-08T05:34:13.461Z] Total : 16841.43 65.79 0.00 0.00 0.00 0.00 0.00 00:30:23.342 00:30:24.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:24.277 Nvme0n1 : 8.00 16895.25 66.00 0.00 0.00 0.00 0.00 0.00 00:30:24.277 [2024-12-08T05:34:14.396Z] =================================================================================================================== 00:30:24.277 [2024-12-08T05:34:14.396Z] Total : 16895.25 66.00 0.00 0.00 0.00 0.00 0.00 00:30:24.277 00:30:25.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:25.211 Nvme0n1 : 9.00 16937.11 66.16 0.00 0.00 0.00 0.00 0.00 00:30:25.211 [2024-12-08T05:34:15.330Z] =================================================================================================================== 00:30:25.211 [2024-12-08T05:34:15.330Z] Total : 16937.11 66.16 0.00 0.00 0.00 0.00 0.00 00:30:25.211 00:30:26.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:26.151 Nvme0n1 : 10.00 16983.30 66.34 0.00 0.00 0.00 0.00 0.00 00:30:26.151 [2024-12-08T05:34:16.270Z] =================================================================================================================== 00:30:26.151 [2024-12-08T05:34:16.270Z] Total : 16983.30 66.34 0.00 0.00 0.00 0.00 0.00 00:30:26.151 00:30:26.151 00:30:26.151 Latency(us) 00:30:26.151 [2024-12-08T05:34:16.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:26.151 Nvme0n1 : 10.01 16986.14 66.35 0.00 0.00 7531.64 4369.07 16796.63 00:30:26.151 [2024-12-08T05:34:16.270Z] =================================================================================================================== 00:30:26.151 [2024-12-08T05:34:16.270Z] Total : 16986.14 66.35 0.00 0.00 7531.64 4369.07 16796.63 00:30:26.151 { 00:30:26.151 "results": [ 00:30:26.151 { 00:30:26.151 "job": "Nvme0n1", 00:30:26.151 "core_mask": "0x2", 00:30:26.151 "workload": "randwrite", 00:30:26.151 "status": "finished", 00:30:26.151 "queue_depth": 128, 00:30:26.151 "io_size": 4096, 00:30:26.151 "runtime": 10.005861, 00:30:26.151 "iops": 16986.144420754994, 00:30:26.151 "mibps": 66.3521266435742, 00:30:26.151 "io_failed": 0, 00:30:26.151 "io_timeout": 0, 00:30:26.151 "avg_latency_us": 7531.635413875994, 00:30:26.151 "min_latency_us": 4369.066666666667, 00:30:26.151 "max_latency_us": 16796.634074074074 00:30:26.151 } 00:30:26.151 ], 00:30:26.151 "core_count": 1 00:30:26.151 } 00:30:26.151 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1203020 00:30:26.151 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1203020 ']' 00:30:26.151 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1203020 
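Note: the summary table and the JSON results block above describe the same run; the MiB/s figure is simply IOPS times the 4 KiB IO size. A quick sanity check of the reported numbers:

    # 16986.14 IOPS * 4096 B per IO / 1048576 B per MiB
    echo '16986.14 * 4096 / 1048576' | bc -l      # 66.3521..., matching the "mibps" field

The surrounding xtrace is killprocess tearing down bdevperf pid 1203020; it verifies the process name (reactor_1) is not sudo before sending the kill.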
00:30:26.151 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:26.151 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.151 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1203020 00:30:26.151 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:26.151 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:26.151 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1203020' 00:30:26.151 killing process with pid 1203020 00:30:26.151 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1203020 00:30:26.151 Received shutdown signal, test time was about 10.000000 seconds 00:30:26.151 00:30:26.151 Latency(us) 00:30:26.151 [2024-12-08T05:34:16.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.151 [2024-12-08T05:34:16.270Z] =================================================================================================================== 00:30:26.151 [2024-12-08T05:34:16.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:26.151 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1203020 00:30:26.408 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:26.665 06:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:26.924 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc0cf40-09a5-4dfb-8533-0d975a1a9224 00:30:26.924 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:27.182 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:27.182 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:27.182 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:27.440 [2024-12-08 06:34:17.544625] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:27.700 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc0cf40-09a5-4dfb-8533-0d975a1a9224 
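Note: free_clusters=61 above is consistent with the earlier geometry: the 150M lvol occupies ceil(150/4) = 38 of the 99 4-MiB data clusters, leaving 61 free (num_allocated_clusters is reported as 38 in the bdev JSON further down). Deleting aio_bdev closes the lvstore (the vbdev_lvs_hotremove_cb NOTICE), so the NOT helper asserts that the bdev_lvol_get_lvstores call traced below must now fail, which it does with -19 "No such device".

    echo $(( 99 - (150 + 3) / 4 ))    # 38 allocated clusters, prints 61 free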
00:30:27.700 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:27.700 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc0cf40-09a5-4dfb-8533-0d975a1a9224 00:30:27.700 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.700 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:27.700 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.700 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:27.700 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.700 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:27.700 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.700 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:27.700 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc0cf40-09a5-4dfb-8533-0d975a1a9224 00:30:27.961 request: 00:30:27.961 { 00:30:27.961 "uuid": "4cc0cf40-09a5-4dfb-8533-0d975a1a9224", 00:30:27.961 "method": "bdev_lvol_get_lvstores", 00:30:27.961 "req_id": 1 00:30:27.961 } 00:30:27.961 Got JSON-RPC error response 00:30:27.961 response: 00:30:27.961 { 00:30:27.961 "code": -19, 00:30:27.961 "message": "No such device" 00:30:27.961 } 00:30:27.961 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:27.961 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:27.961 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:27.961 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:27.961 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:28.221 aio_bdev 00:30:28.221 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
f8425c0f-3157-485a-b525-77d22f6b7134 00:30:28.221 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f8425c0f-3157-485a-b525-77d22f6b7134 00:30:28.221 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:28.221 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:28.221 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:28.221 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:28.221 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:28.480 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f8425c0f-3157-485a-b525-77d22f6b7134 -t 2000 00:30:28.740 [ 00:30:28.740 { 00:30:28.740 "name": "f8425c0f-3157-485a-b525-77d22f6b7134", 00:30:28.740 "aliases": [ 00:30:28.740 "lvs/lvol" 00:30:28.740 ], 00:30:28.740 "product_name": "Logical Volume", 00:30:28.740 "block_size": 4096, 00:30:28.740 "num_blocks": 38912, 00:30:28.740 "uuid": "f8425c0f-3157-485a-b525-77d22f6b7134", 00:30:28.740 "assigned_rate_limits": { 00:30:28.740 "rw_ios_per_sec": 0, 00:30:28.740 "rw_mbytes_per_sec": 0, 00:30:28.740 "r_mbytes_per_sec": 0, 00:30:28.740 "w_mbytes_per_sec": 0 00:30:28.740 }, 00:30:28.740 "claimed": false, 00:30:28.740 "zoned": false, 00:30:28.740 "supported_io_types": { 00:30:28.740 "read": true, 00:30:28.740 "write": true, 00:30:28.740 "unmap": true, 00:30:28.740 "flush": false, 00:30:28.740 "reset": true, 00:30:28.740 "nvme_admin": false, 00:30:28.740 "nvme_io": false, 00:30:28.740 "nvme_io_md": false, 00:30:28.740 "write_zeroes": true, 00:30:28.740 "zcopy": false, 00:30:28.740 "get_zone_info": false, 00:30:28.740 "zone_management": false, 00:30:28.740 "zone_append": false, 00:30:28.740 "compare": false, 00:30:28.740 "compare_and_write": false, 00:30:28.740 "abort": false, 00:30:28.740 "seek_hole": true, 00:30:28.740 "seek_data": true, 00:30:28.740 "copy": false, 00:30:28.740 "nvme_iov_md": false 00:30:28.740 }, 00:30:28.740 "driver_specific": { 00:30:28.740 "lvol": { 00:30:28.740 "lvol_store_uuid": "4cc0cf40-09a5-4dfb-8533-0d975a1a9224", 00:30:28.741 "base_bdev": "aio_bdev", 00:30:28.741 "thin_provision": false, 00:30:28.741 "num_allocated_clusters": 38, 00:30:28.741 "snapshot": false, 00:30:28.741 "clone": false, 00:30:28.741 "esnap_clone": false 00:30:28.741 } 00:30:28.741 } 00:30:28.741 } 00:30:28.741 ] 00:30:28.741 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:28.741 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc0cf40-09a5-4dfb-8533-0d975a1a9224 00:30:28.741 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:29.001 06:34:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:29.001 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4cc0cf40-09a5-4dfb-8533-0d975a1a9224 00:30:29.001 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:29.262 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:29.262 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f8425c0f-3157-485a-b525-77d22f6b7134 00:30:29.522 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4cc0cf40-09a5-4dfb-8533-0d975a1a9224 00:30:29.781 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:30.039 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:30.039 00:30:30.039 real 0m17.718s 00:30:30.039 user 0m17.234s 00:30:30.039 sys 0m1.894s 00:30:30.039 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:30.039 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:30.039 ************************************ 00:30:30.039 END TEST lvs_grow_clean 00:30:30.039 ************************************ 00:30:30.039 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:30.039 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:30.039 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:30.039 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:30.039 ************************************ 00:30:30.039 START TEST lvs_grow_dirty 00:30:30.039 ************************************ 00:30:30.039 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:30.298 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:30.298 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:30.298 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:30.298 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:30.298 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:30.298 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:30.298 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:30.298 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:30.298 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:30.556 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:30.556 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:30.821 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:30.821 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:30.821 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:31.080 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:31.080 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:31.080 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 68f3a833-c6c7-4647-be84-2d3742ab4647 lvol 150 00:30:31.339 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=661cef65-a695-4bf5-8e5b-e5b05ab7bf02 00:30:31.339 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:31.339 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:31.600 [2024-12-08 06:34:21.560639] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:31.600 [2024-12-08 06:34:21.560802] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:31.600 true 00:30:31.600 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:31.600 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:31.860 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:31.860 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:32.119 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 661cef65-a695-4bf5-8e5b-e5b05ab7bf02 00:30:32.377 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:32.636 [2024-12-08 06:34:22.661140] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.636 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:32.895 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1205056 00:30:32.895 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:32.895 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:32.895 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1205056 /var/tmp/bdevperf.sock 00:30:32.895 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1205056 ']' 00:30:32.895 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:32.895 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.895 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:32.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
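Note: waitforlisten blocks until the freshly launched bdevperf answers on its UNIX-domain RPC socket (max_retries=100 in the trace). A minimal sketch of the idea, not the actual helper from autotest_common.sh:

    for ((i = 0; i < 100; i++)); do
        # any cheap RPC works as a liveness probe once the socket is listening
        rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done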
00:30:32.895 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.895 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:32.895 [2024-12-08 06:34:22.983870] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:30:32.895 [2024-12-08 06:34:22.983968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205056 ] 00:30:33.154 [2024-12-08 06:34:23.053759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.154 [2024-12-08 06:34:23.113360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.154 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.154 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:33.154 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:33.725 Nvme0n1 00:30:33.725 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:33.985 [ 00:30:33.985 { 00:30:33.985 "name": "Nvme0n1", 00:30:33.985 "aliases": [ 00:30:33.985 "661cef65-a695-4bf5-8e5b-e5b05ab7bf02" 00:30:33.985 ], 00:30:33.985 "product_name": "NVMe disk", 00:30:33.985 "block_size": 4096, 00:30:33.985 "num_blocks": 38912, 00:30:33.985 "uuid": "661cef65-a695-4bf5-8e5b-e5b05ab7bf02", 00:30:33.985 "numa_id": 1, 00:30:33.985 "assigned_rate_limits": { 00:30:33.985 "rw_ios_per_sec": 0, 00:30:33.985 "rw_mbytes_per_sec": 0, 00:30:33.985 "r_mbytes_per_sec": 0, 00:30:33.985 "w_mbytes_per_sec": 0 00:30:33.985 }, 00:30:33.985 "claimed": false, 00:30:33.985 "zoned": false, 00:30:33.985 "supported_io_types": { 00:30:33.985 "read": true, 00:30:33.985 "write": true, 00:30:33.985 "unmap": true, 00:30:33.985 "flush": true, 00:30:33.985 "reset": true, 00:30:33.985 "nvme_admin": true, 00:30:33.985 "nvme_io": true, 00:30:33.985 "nvme_io_md": false, 00:30:33.985 "write_zeroes": true, 00:30:33.985 "zcopy": false, 00:30:33.985 "get_zone_info": false, 00:30:33.985 "zone_management": false, 00:30:33.985 "zone_append": false, 00:30:33.985 "compare": true, 00:30:33.985 "compare_and_write": true, 00:30:33.985 "abort": true, 00:30:33.985 "seek_hole": false, 00:30:33.985 "seek_data": false, 00:30:33.985 "copy": true, 00:30:33.985 "nvme_iov_md": false 00:30:33.985 }, 00:30:33.985 "memory_domains": [ 00:30:33.985 { 00:30:33.985 "dma_device_id": "system", 00:30:33.985 "dma_device_type": 1 00:30:33.985 } 00:30:33.985 ], 00:30:33.985 "driver_specific": { 00:30:33.985 "nvme": [ 00:30:33.985 { 00:30:33.985 "trid": { 00:30:33.985 "trtype": "TCP", 00:30:33.985 "adrfam": "IPv4", 00:30:33.985 "traddr": "10.0.0.2", 00:30:33.985 "trsvcid": "4420", 00:30:33.985 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:33.985 }, 00:30:33.985 "ctrlr_data": 
{ 00:30:33.985 "cntlid": 1, 00:30:33.985 "vendor_id": "0x8086", 00:30:33.985 "model_number": "SPDK bdev Controller", 00:30:33.985 "serial_number": "SPDK0", 00:30:33.985 "firmware_revision": "25.01", 00:30:33.985 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:33.985 "oacs": { 00:30:33.985 "security": 0, 00:30:33.985 "format": 0, 00:30:33.985 "firmware": 0, 00:30:33.985 "ns_manage": 0 00:30:33.985 }, 00:30:33.985 "multi_ctrlr": true, 00:30:33.985 "ana_reporting": false 00:30:33.985 }, 00:30:33.985 "vs": { 00:30:33.985 "nvme_version": "1.3" 00:30:33.985 }, 00:30:33.985 "ns_data": { 00:30:33.985 "id": 1, 00:30:33.985 "can_share": true 00:30:33.985 } 00:30:33.985 } 00:30:33.985 ], 00:30:33.985 "mp_policy": "active_passive" 00:30:33.985 } 00:30:33.985 } 00:30:33.985 ] 00:30:33.985 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1205190 00:30:33.985 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:33.985 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:33.985 Running I/O for 10 seconds... 00:30:34.921 Latency(us) 00:30:34.921 [2024-12-08T05:34:25.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:34.921 Nvme0n1 : 1.00 16256.00 63.50 0.00 0.00 0.00 0.00 0.00 00:30:34.921 [2024-12-08T05:34:25.040Z] =================================================================================================================== 00:30:34.921 [2024-12-08T05:34:25.040Z] Total : 16256.00 63.50 0.00 0.00 0.00 0.00 0.00 00:30:34.921 00:30:35.853 06:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:36.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:36.112 Nvme0n1 : 2.00 16510.00 64.49 0.00 0.00 0.00 0.00 0.00 00:30:36.112 [2024-12-08T05:34:26.231Z] =================================================================================================================== 00:30:36.112 [2024-12-08T05:34:26.231Z] Total : 16510.00 64.49 0.00 0.00 0.00 0.00 0.00 00:30:36.112 00:30:36.112 true 00:30:36.112 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:36.112 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:36.679 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:36.679 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:36.679 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1205190 00:30:36.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:36.937 Nvme0n1 : 
3.00 16552.33 64.66 0.00 0.00 0.00 0.00 0.00 00:30:36.937 [2024-12-08T05:34:27.056Z] =================================================================================================================== 00:30:36.937 [2024-12-08T05:34:27.056Z] Total : 16552.33 64.66 0.00 0.00 0.00 0.00 0.00 00:30:36.937 00:30:38.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:38.315 Nvme0n1 : 4.00 16668.75 65.11 0.00 0.00 0.00 0.00 0.00 00:30:38.315 [2024-12-08T05:34:28.434Z] =================================================================================================================== 00:30:38.315 [2024-12-08T05:34:28.434Z] Total : 16668.75 65.11 0.00 0.00 0.00 0.00 0.00 00:30:38.315 00:30:38.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:38.881 Nvme0n1 : 5.00 16789.40 65.58 0.00 0.00 0.00 0.00 0.00 00:30:38.881 [2024-12-08T05:34:29.001Z] =================================================================================================================== 00:30:38.882 [2024-12-08T05:34:29.001Z] Total : 16789.40 65.58 0.00 0.00 0.00 0.00 0.00 00:30:38.882 00:30:39.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:39.903 Nvme0n1 : 6.00 16848.67 65.82 0.00 0.00 0.00 0.00 0.00 00:30:39.903 [2024-12-08T05:34:30.022Z] =================================================================================================================== 00:30:39.903 [2024-12-08T05:34:30.022Z] Total : 16848.67 65.82 0.00 0.00 0.00 0.00 0.00 00:30:39.903 00:30:41.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:41.283 Nvme0n1 : 7.00 16800.29 65.63 0.00 0.00 0.00 0.00 0.00 00:30:41.283 [2024-12-08T05:34:31.402Z] =================================================================================================================== 00:30:41.283 [2024-12-08T05:34:31.402Z] Total : 16800.29 65.63 0.00 0.00 0.00 0.00 0.00 00:30:41.283 00:30:42.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:42.220 Nvme0n1 : 8.00 16811.62 65.67 0.00 0.00 0.00 0.00 0.00 00:30:42.220 [2024-12-08T05:34:32.339Z] =================================================================================================================== 00:30:42.220 [2024-12-08T05:34:32.339Z] Total : 16811.62 65.67 0.00 0.00 0.00 0.00 0.00 00:30:42.220 00:30:43.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:43.151 Nvme0n1 : 9.00 16848.67 65.82 0.00 0.00 0.00 0.00 0.00 00:30:43.151 [2024-12-08T05:34:33.270Z] =================================================================================================================== 00:30:43.151 [2024-12-08T05:34:33.270Z] Total : 16848.67 65.82 0.00 0.00 0.00 0.00 0.00 00:30:43.151 00:30:44.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:44.086 Nvme0n1 : 10.00 16880.00 65.94 0.00 0.00 0.00 0.00 0.00 00:30:44.086 [2024-12-08T05:34:34.205Z] =================================================================================================================== 00:30:44.086 [2024-12-08T05:34:34.205Z] Total : 16880.00 65.94 0.00 0.00 0.00 0.00 0.00 00:30:44.086 00:30:44.086 00:30:44.086 Latency(us) 00:30:44.086 [2024-12-08T05:34:34.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:44.086 Nvme0n1 : 10.01 16882.18 65.95 0.00 0.00 7577.80 6990.51 16990.81 00:30:44.086 
[2024-12-08T05:34:34.205Z] =================================================================================================================== 00:30:44.086 [2024-12-08T05:34:34.205Z] Total : 16882.18 65.95 0.00 0.00 7577.80 6990.51 16990.81 00:30:44.086 { 00:30:44.086 "results": [ 00:30:44.086 { 00:30:44.086 "job": "Nvme0n1", 00:30:44.086 "core_mask": "0x2", 00:30:44.086 "workload": "randwrite", 00:30:44.086 "status": "finished", 00:30:44.086 "queue_depth": 128, 00:30:44.086 "io_size": 4096, 00:30:44.086 "runtime": 10.005281, 00:30:44.086 "iops": 16882.184518355858, 00:30:44.086 "mibps": 65.94603327482757, 00:30:44.086 "io_failed": 0, 00:30:44.086 "io_timeout": 0, 00:30:44.086 "avg_latency_us": 7577.799914318235, 00:30:44.086 "min_latency_us": 6990.506666666667, 00:30:44.086 "max_latency_us": 16990.814814814814 00:30:44.086 } 00:30:44.086 ], 00:30:44.086 "core_count": 1 00:30:44.086 } 00:30:44.086 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1205056 00:30:44.086 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1205056 ']' 00:30:44.086 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1205056 00:30:44.086 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:44.086 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:44.086 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1205056 00:30:44.086 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:44.086 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:44.086 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1205056' 00:30:44.086 killing process with pid 1205056 00:30:44.086 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1205056 00:30:44.086 Received shutdown signal, test time was about 10.000000 seconds 00:30:44.086 00:30:44.086 Latency(us) 00:30:44.086 [2024-12-08T05:34:34.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.086 [2024-12-08T05:34:34.205Z] =================================================================================================================== 00:30:44.086 [2024-12-08T05:34:34.206Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:44.087 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1205056 00:30:44.344 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:44.602 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:30:44.860 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:44.860 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1202578 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1202578 00:30:45.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1202578 Killed "${NVMF_APP[@]}" "$@" 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1206513 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1206513 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1206513 ']' 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
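Note: this is where the dirty pass diverges from the clean one: the long-running nvmf_tgt (pid 1202578) is killed with SIGKILL while the lvstore still has unflushed metadata, and a replacement target is started in interrupt mode on a single core. Launch flags as captured above, with the ip netns exec prefix omitted:

    # -i 0: shm id, -e 0xFFFF: enable all tracepoint groups, -m 0x1: core 0 only
    nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1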
00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.118 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:45.118 [2024-12-08 06:34:35.233736] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.118 [2024-12-08 06:34:35.234872] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:30:45.118 [2024-12-08 06:34:35.234935] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.378 [2024-12-08 06:34:35.307938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.378 [2024-12-08 06:34:35.363402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.378 [2024-12-08 06:34:35.363461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.378 [2024-12-08 06:34:35.363475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.379 [2024-12-08 06:34:35.363487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.379 [2024-12-08 06:34:35.363497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.379 [2024-12-08 06:34:35.364151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.379 [2024-12-08 06:34:35.450612] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:45.379 [2024-12-08 06:34:35.450908] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
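Note: because the lvstore was never cleanly unloaded, re-creating aio_bdev below makes vbdev_lvol examine it, and the blobstore performs recovery ("Performing recovery on blobstore", then the "Recover: blob" messages in the following trace). The test then checks that the grown geometry survived the crash (paths shortened as before):

    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # reload triggers recovery
    rpc.py bdev_lvol_get_lvstores -u 68f3a833-c6c7-4647-be84-2d3742ab4647 \
        | jq -r '.[0].free_clusters'                                 # expect 61 again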
00:30:45.637 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.637 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:45.637 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.637 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.637 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:45.637 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.637 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:45.894 [2024-12-08 06:34:35.799026] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:45.894 [2024-12-08 06:34:35.799189] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:45.894 [2024-12-08 06:34:35.799249] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:45.894 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:45.894 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 661cef65-a695-4bf5-8e5b-e5b05ab7bf02 00:30:45.894 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=661cef65-a695-4bf5-8e5b-e5b05ab7bf02 00:30:45.894 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:45.894 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:45.894 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:45.894 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:45.894 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:46.153 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 661cef65-a695-4bf5-8e5b-e5b05ab7bf02 -t 2000 00:30:46.411 [ 00:30:46.411 { 00:30:46.411 "name": "661cef65-a695-4bf5-8e5b-e5b05ab7bf02", 00:30:46.411 "aliases": [ 00:30:46.411 "lvs/lvol" 00:30:46.411 ], 00:30:46.411 "product_name": "Logical Volume", 00:30:46.411 "block_size": 4096, 00:30:46.411 "num_blocks": 38912, 00:30:46.411 "uuid": "661cef65-a695-4bf5-8e5b-e5b05ab7bf02", 00:30:46.411 "assigned_rate_limits": { 00:30:46.411 "rw_ios_per_sec": 0, 00:30:46.411 "rw_mbytes_per_sec": 0, 00:30:46.411 
"r_mbytes_per_sec": 0, 00:30:46.411 "w_mbytes_per_sec": 0 00:30:46.411 }, 00:30:46.411 "claimed": false, 00:30:46.411 "zoned": false, 00:30:46.411 "supported_io_types": { 00:30:46.411 "read": true, 00:30:46.411 "write": true, 00:30:46.411 "unmap": true, 00:30:46.411 "flush": false, 00:30:46.411 "reset": true, 00:30:46.411 "nvme_admin": false, 00:30:46.411 "nvme_io": false, 00:30:46.411 "nvme_io_md": false, 00:30:46.411 "write_zeroes": true, 00:30:46.411 "zcopy": false, 00:30:46.411 "get_zone_info": false, 00:30:46.411 "zone_management": false, 00:30:46.411 "zone_append": false, 00:30:46.411 "compare": false, 00:30:46.411 "compare_and_write": false, 00:30:46.411 "abort": false, 00:30:46.411 "seek_hole": true, 00:30:46.411 "seek_data": true, 00:30:46.411 "copy": false, 00:30:46.411 "nvme_iov_md": false 00:30:46.411 }, 00:30:46.411 "driver_specific": { 00:30:46.411 "lvol": { 00:30:46.411 "lvol_store_uuid": "68f3a833-c6c7-4647-be84-2d3742ab4647", 00:30:46.411 "base_bdev": "aio_bdev", 00:30:46.411 "thin_provision": false, 00:30:46.411 "num_allocated_clusters": 38, 00:30:46.411 "snapshot": false, 00:30:46.411 "clone": false, 00:30:46.411 "esnap_clone": false 00:30:46.411 } 00:30:46.411 } 00:30:46.411 } 00:30:46.411 ] 00:30:46.411 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:46.411 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:46.411 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:46.669 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:46.669 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:46.669 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:46.926 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:46.926 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:47.184 [2024-12-08 06:34:37.220665] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:47.184 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:47.184 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:47.184 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:47.184 06:34:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:47.184 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:47.184 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:47.184 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:47.184 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:47.184 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:47.184 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:47.184 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:47.184 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:47.444 request: 00:30:47.444 { 00:30:47.444 "uuid": "68f3a833-c6c7-4647-be84-2d3742ab4647", 00:30:47.444 "method": "bdev_lvol_get_lvstores", 00:30:47.444 "req_id": 1 00:30:47.444 } 00:30:47.444 Got JSON-RPC error response 00:30:47.444 response: 00:30:47.444 { 00:30:47.444 "code": -19, 00:30:47.444 "message": "No such device" 00:30:47.444 } 00:30:47.444 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:47.444 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:47.444 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:47.444 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:47.444 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:47.704 aio_bdev 00:30:47.704 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 661cef65-a695-4bf5-8e5b-e5b05ab7bf02 00:30:47.704 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=661cef65-a695-4bf5-8e5b-e5b05ab7bf02 00:30:47.704 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:47.704 06:34:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:47.704 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:47.704 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:47.704 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:47.964 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 661cef65-a695-4bf5-8e5b-e5b05ab7bf02 -t 2000 00:30:48.224 [ 00:30:48.224 { 00:30:48.224 "name": "661cef65-a695-4bf5-8e5b-e5b05ab7bf02", 00:30:48.224 "aliases": [ 00:30:48.224 "lvs/lvol" 00:30:48.224 ], 00:30:48.224 "product_name": "Logical Volume", 00:30:48.224 "block_size": 4096, 00:30:48.224 "num_blocks": 38912, 00:30:48.224 "uuid": "661cef65-a695-4bf5-8e5b-e5b05ab7bf02", 00:30:48.224 "assigned_rate_limits": { 00:30:48.224 "rw_ios_per_sec": 0, 00:30:48.224 "rw_mbytes_per_sec": 0, 00:30:48.224 "r_mbytes_per_sec": 0, 00:30:48.224 "w_mbytes_per_sec": 0 00:30:48.224 }, 00:30:48.224 "claimed": false, 00:30:48.224 "zoned": false, 00:30:48.224 "supported_io_types": { 00:30:48.224 "read": true, 00:30:48.224 "write": true, 00:30:48.224 "unmap": true, 00:30:48.224 "flush": false, 00:30:48.224 "reset": true, 00:30:48.224 "nvme_admin": false, 00:30:48.224 "nvme_io": false, 00:30:48.224 "nvme_io_md": false, 00:30:48.224 "write_zeroes": true, 00:30:48.224 "zcopy": false, 00:30:48.224 "get_zone_info": false, 00:30:48.224 "zone_management": false, 00:30:48.224 "zone_append": false, 00:30:48.224 "compare": false, 00:30:48.224 "compare_and_write": false, 00:30:48.224 "abort": false, 00:30:48.224 "seek_hole": true, 00:30:48.224 "seek_data": true, 00:30:48.224 "copy": false, 00:30:48.224 "nvme_iov_md": false 00:30:48.224 }, 00:30:48.224 "driver_specific": { 00:30:48.224 "lvol": { 00:30:48.224 "lvol_store_uuid": "68f3a833-c6c7-4647-be84-2d3742ab4647", 00:30:48.224 "base_bdev": "aio_bdev", 00:30:48.224 "thin_provision": false, 00:30:48.224 "num_allocated_clusters": 38, 00:30:48.224 "snapshot": false, 00:30:48.224 "clone": false, 00:30:48.224 "esnap_clone": false 00:30:48.224 } 00:30:48.224 } 00:30:48.224 } 00:30:48.224 ] 00:30:48.482 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:48.482 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:48.482 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:48.740 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:48.740 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:48.740 06:34:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:48.999 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:48.999 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 661cef65-a695-4bf5-8e5b-e5b05ab7bf02 00:30:49.257 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 68f3a833-c6c7-4647-be84-2d3742ab4647 00:30:49.515 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:49.774 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:49.774 00:30:49.774 real 0m19.623s 00:30:49.774 user 0m36.327s 00:30:49.775 sys 0m5.031s 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:49.775 ************************************ 00:30:49.775 END TEST lvs_grow_dirty 00:30:49.775 ************************************ 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:49.775 nvmf_trace.0 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
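
The dirty-grow teardown above runs strictly bottom-up: the lvol goes first, then its lvstore, then the backing aio bdev, and the AIO file is removed last. A minimal sketch of that sequence, assuming the rpc.py path used throughout this run; the two UUIDs are the ones printed above and differ per run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # mirrors nvmf_lvs_grow.sh@92-95 above: lvol, lvstore, aio bdev, backing file
    $rpc bdev_lvol_delete 661cef65-a695-4bf5-8e5b-e5b05ab7bf02
    $rpc bdev_lvol_delete_lvstore -u 68f3a833-c6c7-4647-be84-2d3742ab4647
    $rpc bdev_aio_delete aio_bdev
    rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
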
00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:49.775 rmmod nvme_tcp 00:30:49.775 rmmod nvme_fabrics 00:30:49.775 rmmod nvme_keyring 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1206513 ']' 00:30:49.775 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1206513 00:30:50.036 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1206513 ']' 00:30:50.036 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1206513 00:30:50.036 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:50.036 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:50.036 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1206513 00:30:50.036 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:50.036 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:50.036 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1206513' 00:30:50.036 killing process with pid 1206513 00:30:50.036 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1206513 00:30:50.036 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1206513 00:30:50.296 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:50.296 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:50.296 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:50.296 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:50.296 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:50.296 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:50.296 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:50.296 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:50.296 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:50.296 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.296 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.296 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.203 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:52.203 00:30:52.203 real 0m42.973s 00:30:52.203 user 0m55.443s 00:30:52.203 sys 0m8.971s 00:30:52.203 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:52.203 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:52.203 ************************************ 00:30:52.203 END TEST nvmf_lvs_grow 00:30:52.203 ************************************ 00:30:52.203 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:52.203 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:52.203 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:52.203 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:52.203 ************************************ 00:30:52.203 START TEST nvmf_bdev_io_wait 00:30:52.203 ************************************ 00:30:52.203 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:52.462 * Looking for test storage... 
00:30:52.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:52.462 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:52.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.463 --rc genhtml_branch_coverage=1 00:30:52.463 --rc genhtml_function_coverage=1 00:30:52.463 --rc genhtml_legend=1 00:30:52.463 --rc geninfo_all_blocks=1 00:30:52.463 --rc geninfo_unexecuted_blocks=1 00:30:52.463 00:30:52.463 ' 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:52.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.463 --rc genhtml_branch_coverage=1 00:30:52.463 --rc genhtml_function_coverage=1 00:30:52.463 --rc genhtml_legend=1 00:30:52.463 --rc geninfo_all_blocks=1 00:30:52.463 --rc geninfo_unexecuted_blocks=1 00:30:52.463 00:30:52.463 ' 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:52.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.463 --rc genhtml_branch_coverage=1 00:30:52.463 --rc genhtml_function_coverage=1 00:30:52.463 --rc genhtml_legend=1 00:30:52.463 --rc geninfo_all_blocks=1 00:30:52.463 --rc geninfo_unexecuted_blocks=1 00:30:52.463 00:30:52.463 ' 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:52.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.463 --rc genhtml_branch_coverage=1 00:30:52.463 --rc genhtml_function_coverage=1 00:30:52.463 --rc genhtml_legend=1 00:30:52.463 --rc geninfo_all_blocks=1 00:30:52.463 --rc 
geninfo_unexecuted_blocks=1 00:30:52.463 00:30:52.463 ' 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:52.463 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:52.464 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
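
The probe above buckets NICs purely by PCI vendor:device ID before it ever looks at net devices. A condensed, illustrative sketch of that classification using only the IDs visible in this run (0x8086:0x159b is what matches as e810 here); this is not the full table from nvmf/common.sh:

    # e810: 0x1592, 0x159b and x722: 0x37d2 under Intel 0x8086;
    # the mlx bucket collects Mellanox 0x15b3 device IDs (0x1013..0x1021, 0xa2d6, 0xa2dc)
    classify_nic() {
        local vendor=$1 device=$2
        case "$vendor:$device" in
            0x8086:0x1592|0x8086:0x159b) echo e810 ;;
            0x8086:0x37d2)               echo x722 ;;
            0x15b3:*)                    echo mlx ;;
            *)                           echo unknown ;;
        esac
    }
    classify_nic 0x8086 0x159b   # -> e810, as for 0000:84:00.0/.1 above
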
00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:55.000 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:55.000 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:55.000 Found net devices under 0000:84:00.0: cvl_0_0 00:30:55.000 
06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:55.000 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:55.001 Found net devices under 0000:84:00.1: cvl_0_1 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:55.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:55.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:30:55.001 00:30:55.001 --- 10.0.0.2 ping statistics --- 00:30:55.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.001 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:55.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:55.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:30:55.001 00:30:55.001 --- 10.0.0.1 ping statistics --- 00:30:55.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.001 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1209060 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1209060 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1209060 ']' 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
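
The two pings above complete the split-namespace topology this suite runs on: the target-side port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic crosses a real link. A sketch of the setup steps echoed in the log, assuming the ports are already named cvl_0_0/cvl_0_1 (the harness additionally tags the iptables rule with an SPDK_NVMF comment for later cleanup):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
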
00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.001 06:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:55.001 [2024-12-08 06:34:44.771943] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:55.001 [2024-12-08 06:34:44.773001] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:30:55.001 [2024-12-08 06:34:44.773061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:55.001 [2024-12-08 06:34:44.847235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:55.001 [2024-12-08 06:34:44.908791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:55.001 [2024-12-08 06:34:44.908849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:55.001 [2024-12-08 06:34:44.908883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:55.001 [2024-12-08 06:34:44.908896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:55.002 [2024-12-08 06:34:44.908906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:55.002 [2024-12-08 06:34:44.910747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.002 [2024-12-08 06:34:44.910806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:55.002 [2024-12-08 06:34:44.910811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.002 [2024-12-08 06:34:44.910774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:55.002 [2024-12-08 06:34:44.911324] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:55.002 [2024-12-08 06:34:45.112953] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:55.002 [2024-12-08 06:34:45.113146] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:55.002 [2024-12-08 06:34:45.113994] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:55.002 [2024-12-08 06:34:45.114833] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
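
The startup sequence just logged is a two-phase handshake: nvmfappstart launches the target paused with --wait-for-rpc so bdev options can be changed before subsystem init, and framework_start_init then brings the poll groups up (in interrupt mode here). A sketch of that handshake with the same binary, namespace, and flags as above; the small -p/-c values come from bdev_io_wait.sh@18 and presumably shrink the bdev_io pool/cache so the I/O-wait path this test exercises actually triggers:

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target idles until RPCs arrive (rpc.py reaches /var/tmp/spdk.sock across the net namespace)
    ip netns exec cvl_0_0_ns_spdk $tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    $rpc bdev_set_options -p 5 -c 1    # must happen before init
    $rpc framework_start_init          # now the poll-group threads start
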
00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.002 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:55.259 [2024-12-08 06:34:45.119508] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.259 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.259 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:55.259 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:55.260 Malloc0 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:55.260 [2024-12-08 06:34:45.171754] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1209202 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1209204 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:55.260 { 00:30:55.260 "params": { 00:30:55.260 "name": "Nvme$subsystem", 00:30:55.260 "trtype": "$TEST_TRANSPORT", 00:30:55.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.260 "adrfam": "ipv4", 00:30:55.260 "trsvcid": "$NVMF_PORT", 00:30:55.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.260 "hdgst": ${hdgst:-false}, 00:30:55.260 "ddgst": ${ddgst:-false} 00:30:55.260 }, 00:30:55.260 "method": "bdev_nvme_attach_controller" 00:30:55.260 } 00:30:55.260 EOF 00:30:55.260 )") 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1209206 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:55.260 { 00:30:55.260 "params": { 00:30:55.260 "name": "Nvme$subsystem", 00:30:55.260 "trtype": "$TEST_TRANSPORT", 00:30:55.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.260 "adrfam": "ipv4", 00:30:55.260 "trsvcid": "$NVMF_PORT", 00:30:55.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.260 "hdgst": ${hdgst:-false}, 00:30:55.260 "ddgst": ${ddgst:-false} 00:30:55.260 }, 00:30:55.260 "method": "bdev_nvme_attach_controller" 00:30:55.260 } 00:30:55.260 EOF 00:30:55.260 )") 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=1209209 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:55.260 { 00:30:55.260 "params": { 00:30:55.260 "name": "Nvme$subsystem", 00:30:55.260 "trtype": "$TEST_TRANSPORT", 00:30:55.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.260 "adrfam": "ipv4", 00:30:55.260 "trsvcid": "$NVMF_PORT", 00:30:55.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.260 "hdgst": ${hdgst:-false}, 00:30:55.260 "ddgst": ${ddgst:-false} 00:30:55.260 }, 00:30:55.260 "method": "bdev_nvme_attach_controller" 00:30:55.260 } 00:30:55.260 EOF 00:30:55.260 )") 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:55.260 { 00:30:55.260 "params": { 00:30:55.260 "name": "Nvme$subsystem", 00:30:55.260 "trtype": "$TEST_TRANSPORT", 00:30:55.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.260 "adrfam": "ipv4", 00:30:55.260 "trsvcid": "$NVMF_PORT", 00:30:55.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.260 "hdgst": ${hdgst:-false}, 00:30:55.260 "ddgst": ${ddgst:-false} 00:30:55.260 }, 00:30:55.260 "method": "bdev_nvme_attach_controller" 00:30:55.260 } 00:30:55.260 EOF 00:30:55.260 )") 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1209202 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
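The four jobs above condensed into one runnable sketch (paths relative to an SPDK checkout; gen_nvmf_target_json comes from the suite's nvmf common.sh and is assumed sourced): each bdevperf gets its own core mask, shared-memory instance id (-i), and workload, and reads the generated attach-controller config over an anonymous fd, which is exactly the --json /dev/fd/63 seen in the trace.

  BP=./build/examples/bdevperf
  $BP -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  $BP -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  $BP -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  $BP -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  # reap the writer first, then the others, mirroring the wait calls in the trace
  wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID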
00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:55.260 "params": { 00:30:55.260 "name": "Nvme1", 00:30:55.260 "trtype": "tcp", 00:30:55.260 "traddr": "10.0.0.2", 00:30:55.260 "adrfam": "ipv4", 00:30:55.260 "trsvcid": "4420", 00:30:55.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:55.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:55.260 "hdgst": false, 00:30:55.260 "ddgst": false 00:30:55.260 }, 00:30:55.260 "method": "bdev_nvme_attach_controller" 00:30:55.260 }' 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:55.260 "params": { 00:30:55.260 "name": "Nvme1", 00:30:55.260 "trtype": "tcp", 00:30:55.260 "traddr": "10.0.0.2", 00:30:55.260 "adrfam": "ipv4", 00:30:55.260 "trsvcid": "4420", 00:30:55.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:55.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:55.260 "hdgst": false, 00:30:55.260 "ddgst": false 00:30:55.260 }, 00:30:55.260 "method": "bdev_nvme_attach_controller" 00:30:55.260 }' 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:55.260 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:55.260 "params": { 00:30:55.261 "name": "Nvme1", 00:30:55.261 "trtype": "tcp", 00:30:55.261 "traddr": "10.0.0.2", 00:30:55.261 "adrfam": "ipv4", 00:30:55.261 "trsvcid": "4420", 00:30:55.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:55.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:55.261 "hdgst": false, 00:30:55.261 "ddgst": false 00:30:55.261 }, 00:30:55.261 "method": "bdev_nvme_attach_controller" 00:30:55.261 }' 00:30:55.261 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:55.261 06:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:55.261 "params": { 00:30:55.261 "name": "Nvme1", 00:30:55.261 "trtype": "tcp", 00:30:55.261 "traddr": "10.0.0.2", 00:30:55.261 "adrfam": "ipv4", 00:30:55.261 "trsvcid": "4420", 00:30:55.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:55.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:55.261 "hdgst": false, 00:30:55.261 "ddgst": false 00:30:55.261 }, 00:30:55.261 "method": "bdev_nvme_attach_controller" 00:30:55.261 }' 00:30:55.261 [2024-12-08 06:34:45.221535] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:30:55.261 [2024-12-08 06:34:45.221535] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:30:55.261 [2024-12-08 06:34:45.221535] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
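For reference, the resolved per-controller object printed above is only the inner piece; bdevperf expects a complete JSON config on --json. A plausible full shape (the "subsystems"/"bdev" wrapper is an assumption, not shown in the trace; the params are verbatim from the printf output above):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }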
00:30:55.261 [2024-12-08 06:34:45.221616] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:30:55.261 [2024-12-08 06:34:45.221617] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:30:55.261 [2024-12-08 06:34:45.221616] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:30:55.261 [2024-12-08 06:34:45.222613] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:30:55.261 [2024-12-08 06:34:45.222690] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:30:55.519 [2024-12-08 06:34:45.402703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:55.519 [2024-12-08 06:34:45.456459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:55.519 [2024-12-08 06:34:45.501218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:55.519 [2024-12-08 06:34:45.556290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:55.519 [2024-12-08 06:34:45.613419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:55.779 [2024-12-08 06:34:45.676509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:55.779 [2024-12-08 06:34:45.687981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:55.779 [2024-12-08 06:34:45.728821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:30:55.779 Running I/O for 1 seconds...
00:30:55.779 Running I/O for 1 seconds...
00:30:56.037 Running I/O for 1 seconds...
00:30:56.037 Running I/O for 1 seconds...
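The hex core masks map one bit per core, which lines up with the reactor notices above: 0x10 -> core 4 (write), 0x20 -> core 5 (read), 0x40 -> core 6 (flush), 0x80 -> core 7 (unmap). A quick bash check of the lowest set bit in a mask:

  mask=0x40
  core=0
  # walk upward until the bit for this core is set
  while (( ((mask >> core) & 1) == 0 )); do core=$((core + 1)); done
  echo "mask $mask pins core $core"   # -> mask 0x40 pins core 6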
00:30:56.972 7252.00 IOPS, 28.33 MiB/s [2024-12-08T05:34:47.091Z] 189288.00 IOPS, 739.41 MiB/s
00:30:56.972 Latency(us)
00:30:56.972 [2024-12-08T05:34:47.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:56.973 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:30:56.973 Nvme1n1 : 1.00 188936.27 738.03 0.00 0.00 673.80 279.13 1832.58
00:30:56.973 [2024-12-08T05:34:47.092Z] ===================================================================================================================
00:30:56.973 [2024-12-08T05:34:47.092Z] Total : 188936.27 738.03 0.00 0.00 673.80 279.13 1832.58
00:30:56.973
00:30:56.973 Latency(us)
00:30:56.973 [2024-12-08T05:34:47.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:56.973 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:30:56.973 Nvme1n1 : 1.02 7251.80 28.33 0.00 0.00 17468.12 4393.34 33787.45
00:30:56.973 [2024-12-08T05:34:47.092Z] ===================================================================================================================
00:30:56.973 [2024-12-08T05:34:47.092Z] Total : 7251.80 28.33 0.00 0.00 17468.12 4393.34 33787.45
00:30:56.973 6734.00 IOPS, 26.30 MiB/s
00:30:56.973 Latency(us)
00:30:56.973 [2024-12-08T05:34:47.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:56.973 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:30:56.973 Nvme1n1 : 1.01 6835.14 26.70 0.00 0.00 18662.60 5461.33 36117.62
00:30:56.973 [2024-12-08T05:34:47.092Z] ===================================================================================================================
00:30:56.973 [2024-12-08T05:34:47.092Z] Total : 6835.14 26.70 0.00 0.00 18662.60 5461.33 36117.62
00:30:56.973 10177.00 IOPS, 39.75 MiB/s
00:30:56.973 Latency(us)
00:30:56.973 [2024-12-08T05:34:47.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:56.973 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:30:56.973 Nvme1n1 : 1.01 10257.13 40.07 0.00 0.00 12440.49 3932.16 18058.81
00:30:56.973 [2024-12-08T05:34:47.092Z] ===================================================================================================================
00:30:56.973 [2024-12-08T05:34:47.092Z] Total : 10257.13 40.07 0.00 0.00 12440.49 3932.16 18058.81
00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1209204
00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1209206
00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1209209
00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:30:57.231 06:34:47
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:57.231 rmmod nvme_tcp 00:30:57.231 rmmod nvme_fabrics 00:30:57.231 rmmod nvme_keyring 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1209060 ']' 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1209060 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1209060 ']' 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1209060 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1209060 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1209060' 00:30:57.231 killing process with pid 1209060 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1209060 00:30:57.231 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1209060 00:30:57.489 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:57.490 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:57.490 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:57.490 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:57.490 06:34:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:57.490 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:57.490 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:57.490 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:57.490 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:57.490 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.490 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.490 06:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:00.036 00:31:00.036 real 0m7.261s 00:31:00.036 user 0m14.630s 00:31:00.036 sys 0m4.055s 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:00.036 ************************************ 00:31:00.036 END TEST nvmf_bdev_io_wait 00:31:00.036 ************************************ 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:00.036 ************************************ 00:31:00.036 START TEST nvmf_queue_depth 00:31:00.036 ************************************ 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:00.036 * Looking for test storage... 
00:31:00.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:00.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.036 --rc genhtml_branch_coverage=1 00:31:00.036 --rc genhtml_function_coverage=1 00:31:00.036 --rc genhtml_legend=1 00:31:00.036 --rc geninfo_all_blocks=1 00:31:00.036 --rc geninfo_unexecuted_blocks=1 00:31:00.036 00:31:00.036 ' 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:00.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.036 --rc genhtml_branch_coverage=1 00:31:00.036 --rc genhtml_function_coverage=1 00:31:00.036 --rc genhtml_legend=1 00:31:00.036 --rc geninfo_all_blocks=1 00:31:00.036 --rc geninfo_unexecuted_blocks=1 00:31:00.036 00:31:00.036 ' 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:00.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.036 --rc genhtml_branch_coverage=1 00:31:00.036 --rc genhtml_function_coverage=1 00:31:00.036 --rc genhtml_legend=1 00:31:00.036 --rc geninfo_all_blocks=1 00:31:00.036 --rc geninfo_unexecuted_blocks=1 00:31:00.036 00:31:00.036 ' 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:00.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.036 --rc genhtml_branch_coverage=1 00:31:00.036 --rc genhtml_function_coverage=1 00:31:00.036 --rc genhtml_legend=1 00:31:00.036 --rc geninfo_all_blocks=1 00:31:00.036 --rc 
geninfo_unexecuted_blocks=1 00:31:00.036 00:31:00.036 ' 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.036 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:00.037 06:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
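How build_nvmf_app_args, traced above, assembles the target command line (the binary path and NO_HUGE default are assumptions; the shm id, trace mask, and --interrupt-mode flag are exactly what the trace appends):

  NVMF_APP=(./build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id + all tracepoint groups
  NVMF_APP+=(--interrupt-mode)                  # because the suite runs with --interrupt-mode
  # once the target namespace exists, the whole command is prefixed with the netns wrapper:
  NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")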
00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.936 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:01.937 06:34:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:01.937 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:01.937 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:31:01.937 Found net devices under 0000:84:00.0: cvl_0_0 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:01.937 Found net devices under 0000:84:00.1: cvl_0_1 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:01.937 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:01.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:01.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:31:01.937 00:31:01.937 --- 10.0.0.2 ping statistics --- 00:31:01.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.937 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:01.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:31:01.937 00:31:01.937 --- 10.0.0.1 ping statistics --- 00:31:01.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.937 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:01.937 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:01.938 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.938 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1211443 00:31:01.938 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:01.938 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1211443 00:31:01.938 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1211443 ']' 00:31:01.938 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.938 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.938 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
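The namespace plumbing and connectivity check above, replayed as a condensed standalone sketch (interface names and addresses taken from the trace; the initial address flushes and the iptables comment option are omitted for brevity):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator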
00:31:01.938 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.938 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:02.197 [2024-12-08 06:34:52.093244] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:02.197 [2024-12-08 06:34:52.094361] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:31:02.197 [2024-12-08 06:34:52.094442] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.197 [2024-12-08 06:34:52.172285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.197 [2024-12-08 06:34:52.230614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.197 [2024-12-08 06:34:52.230680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:02.197 [2024-12-08 06:34:52.230717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:02.197 [2024-12-08 06:34:52.230739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:02.197 [2024-12-08 06:34:52.230749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:02.197 [2024-12-08 06:34:52.231466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.455 [2024-12-08 06:34:52.329328] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:02.455 [2024-12-08 06:34:52.329640] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
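The launch behind nvmfpid above, condensed (path relative to an SPDK checkout, everything else verbatim): a single-core (-m 0x2) interrupt-mode target inside the namespace, after which waitforlisten polls /var/tmp/spdk.sock until the RPC server answers.

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!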
00:31:02.455 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.455 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:02.455 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:02.455 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:02.455 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:02.455 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.455 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:02.455 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.455 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:02.455 [2024-12-08 06:34:52.380134] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.455 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.455 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:02.456 Malloc0 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:02.456 [2024-12-08 06:34:52.444314] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1211465 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1211465 /var/tmp/bdevperf.sock 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1211465 ']' 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:02.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:02.456 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:02.456 [2024-12-08 06:34:52.490777] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:31:02.456 [2024-12-08 06:34:52.490858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211465 ] 00:31:02.456 [2024-12-08 06:34:52.556364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.716 [2024-12-08 06:34:52.613168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.716 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.716 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:02.716 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:02.716 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.716 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:02.974 NVMe0n1 00:31:02.974 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.974 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:02.974 Running I/O for 10 seconds... 00:31:05.287 8813.00 IOPS, 34.43 MiB/s [2024-12-08T05:34:56.337Z] 9197.50 IOPS, 35.93 MiB/s [2024-12-08T05:34:57.275Z] 9218.33 IOPS, 36.01 MiB/s [2024-12-08T05:34:58.208Z] 9272.75 IOPS, 36.22 MiB/s [2024-12-08T05:34:59.145Z] 9396.40 IOPS, 36.70 MiB/s [2024-12-08T05:35:00.084Z] 9388.67 IOPS, 36.67 MiB/s [2024-12-08T05:35:01.462Z] 9366.29 IOPS, 36.59 MiB/s [2024-12-08T05:35:02.422Z] 9348.75 IOPS, 36.52 MiB/s [2024-12-08T05:35:03.384Z] 9423.11 IOPS, 36.81 MiB/s [2024-12-08T05:35:03.384Z] 9426.30 IOPS, 36.82 MiB/s 00:31:13.265 Latency(us) 00:31:13.265 [2024-12-08T05:35:03.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.265 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:13.265 Verification LBA range: start 0x0 length 0x4000 00:31:13.265 NVMe0n1 : 10.07 9462.72 36.96 0.00 0.00 107805.40 17670.45 76118.85 00:31:13.265 [2024-12-08T05:35:03.384Z] =================================================================================================================== 00:31:13.265 [2024-12-08T05:35:03.384Z] Total : 9462.72 36.96 0.00 0.00 107805.40 17670.45 76118.85 00:31:13.265 { 00:31:13.265 "results": [ 00:31:13.265 { 00:31:13.265 "job": "NVMe0n1", 00:31:13.265 "core_mask": "0x1", 00:31:13.265 "workload": "verify", 00:31:13.265 "status": "finished", 00:31:13.265 "verify_range": { 00:31:13.265 "start": 0, 00:31:13.265 "length": 16384 00:31:13.265 }, 00:31:13.265 "queue_depth": 1024, 00:31:13.265 "io_size": 4096, 00:31:13.265 "runtime": 10.068988, 00:31:13.265 "iops": 9462.718596943407, 00:31:13.265 "mibps": 36.96374451931018, 00:31:13.265 "io_failed": 0, 00:31:13.265 "io_timeout": 0, 00:31:13.265 "avg_latency_us": 107805.395808813, 00:31:13.265 "min_latency_us": 17670.447407407406, 00:31:13.266 "max_latency_us": 76118.85037037038 00:31:13.266 } 00:31:13.266 ], 
00:31:13.266 "core_count": 1 00:31:13.266 } 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1211465 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1211465 ']' 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1211465 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1211465 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1211465' 00:31:13.266 killing process with pid 1211465 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1211465 00:31:13.266 Received shutdown signal, test time was about 10.000000 seconds 00:31:13.266 00:31:13.266 Latency(us) 00:31:13.266 [2024-12-08T05:35:03.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.266 [2024-12-08T05:35:03.385Z] =================================================================================================================== 00:31:13.266 [2024-12-08T05:35:03.385Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1211465 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:13.266 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.524 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.524 rmmod nvme_tcp 00:31:13.524 rmmod nvme_fabrics 00:31:13.524 rmmod nvme_keyring 00:31:13.524 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.524 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:13.524 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:13.524 06:35:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1211443 ']' 00:31:13.524 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1211443 00:31:13.524 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1211443 ']' 00:31:13.524 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1211443 00:31:13.524 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:13.524 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.524 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1211443 00:31:13.524 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:13.524 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:13.525 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1211443' 00:31:13.525 killing process with pid 1211443 00:31:13.525 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1211443 00:31:13.525 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1211443 00:31:13.783 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:13.783 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:13.783 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:13.783 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:13.783 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:13.783 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:13.783 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:31:13.783 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.783 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.783 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.783 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.783 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.688 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:15.688 00:31:15.688 real 0m16.149s 00:31:15.688 user 0m21.988s 00:31:15.688 sys 0m3.771s 00:31:15.688 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:31:15.688 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:15.688 ************************************ 00:31:15.688 END TEST nvmf_queue_depth 00:31:15.688 ************************************ 00:31:15.688 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:15.688 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:15.688 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:15.688 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:15.688 ************************************ 00:31:15.688 START TEST nvmf_target_multipath 00:31:15.688 ************************************ 00:31:15.688 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:15.947 * Looking for test storage... 00:31:15.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:15.947 06:35:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:15.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.947 --rc genhtml_branch_coverage=1 00:31:15.947 --rc genhtml_function_coverage=1 00:31:15.947 --rc genhtml_legend=1 00:31:15.947 --rc geninfo_all_blocks=1 00:31:15.947 --rc geninfo_unexecuted_blocks=1 00:31:15.947 00:31:15.947 ' 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:15.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.947 --rc genhtml_branch_coverage=1 00:31:15.947 --rc genhtml_function_coverage=1 00:31:15.947 --rc genhtml_legend=1 00:31:15.947 --rc geninfo_all_blocks=1 00:31:15.947 --rc geninfo_unexecuted_blocks=1 00:31:15.947 00:31:15.947 ' 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:15.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.947 --rc genhtml_branch_coverage=1 00:31:15.947 --rc genhtml_function_coverage=1 00:31:15.947 --rc genhtml_legend=1 00:31:15.947 --rc geninfo_all_blocks=1 00:31:15.947 --rc 
geninfo_unexecuted_blocks=1 00:31:15.947 00:31:15.947 ' 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:15.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.947 --rc genhtml_branch_coverage=1 00:31:15.947 --rc genhtml_function_coverage=1 00:31:15.947 --rc genhtml_legend=1 00:31:15.947 --rc geninfo_all_blocks=1 00:31:15.947 --rc geninfo_unexecuted_blocks=1 00:31:15.947 00:31:15.947 ' 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.947 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.948 06:35:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:15.948 06:35:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
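The build_nvmf_app_args trace above shows how nvmf/common.sh assembles the target's command line for this interrupt-mode run. Below is a short paraphrase, with the branch outcomes ('[' 0 -eq 1 ']' false, '[' 1 -eq 1 ']' true) taken from the trace; the body of the untaken first branch is not visible in the log and is deliberately omitted.

    # Paraphrase of the traced build_nvmf_app_args logic (illustrative sketch,
    # not the function's full source).
    build_nvmf_app_args_sketch() {
        # First branch ('[' 0 -eq 1 ']') is not taken in this run; its body
        # never appears in the trace, so it is left out here.
        NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shm id + 0xFFFF tracepoint mask
        NVMF_APP+=("${NO_HUGE[@]}")                  # expands to nothing in this run
        if [ 1 -eq 1 ]; then                         # interrupt-mode test flavor
            NVMF_APP+=(--interrupt-mode)
        fi
    }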
00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:18.482 06:35:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:18.482 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:18.482 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:18.483 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:18.483 06:35:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:18.483 Found net devices under 0000:84:00.0: cvl_0_0 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:18.483 Found net devices under 0000:84:00.1: cvl_0_1 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:18.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:18.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:31:18.483 00:31:18.483 --- 10.0.0.2 ping statistics --- 00:31:18.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.483 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:18.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:18.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:31:18.483 00:31:18.483 --- 10.0.0.1 ping statistics --- 00:31:18.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.483 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:18.483 only one NIC for nvmf test 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:18.483 rmmod nvme_tcp 00:31:18.483 rmmod nvme_fabrics 00:31:18.483 rmmod nvme_keyring 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:18.483 06:35:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:18.483 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:18.484 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:18.484 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:18.484 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:18.484 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:18.484 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:18.484 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:18.484 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.484 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.484 06:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:20.386 06:35:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:20.386 00:31:20.386 real 0m4.539s 00:31:20.386 user 0m0.935s 00:31:20.386 sys 0m1.596s 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:20.386 ************************************ 00:31:20.386 END TEST nvmf_target_multipath 00:31:20.386 ************************************ 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:20.386 ************************************ 00:31:20.386 START TEST nvmf_zcopy 00:31:20.386 ************************************ 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:20.386 * Looking for test storage... 
00:31:20.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:20.386 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:20.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.647 --rc genhtml_branch_coverage=1 00:31:20.647 --rc genhtml_function_coverage=1 00:31:20.647 --rc genhtml_legend=1 00:31:20.647 --rc geninfo_all_blocks=1 00:31:20.647 --rc geninfo_unexecuted_blocks=1 00:31:20.647 00:31:20.647 ' 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:20.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.647 --rc genhtml_branch_coverage=1 00:31:20.647 --rc genhtml_function_coverage=1 00:31:20.647 --rc genhtml_legend=1 00:31:20.647 --rc geninfo_all_blocks=1 00:31:20.647 --rc geninfo_unexecuted_blocks=1 00:31:20.647 00:31:20.647 ' 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:20.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.647 --rc genhtml_branch_coverage=1 00:31:20.647 --rc genhtml_function_coverage=1 00:31:20.647 --rc genhtml_legend=1 00:31:20.647 --rc geninfo_all_blocks=1 00:31:20.647 --rc geninfo_unexecuted_blocks=1 00:31:20.647 00:31:20.647 ' 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:20.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.647 --rc genhtml_branch_coverage=1 00:31:20.647 --rc genhtml_function_coverage=1 00:31:20.647 --rc genhtml_legend=1 00:31:20.647 --rc geninfo_all_blocks=1 00:31:20.647 --rc geninfo_unexecuted_blocks=1 00:31:20.647 00:31:20.647 ' 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:20.647 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.648 06:35:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:20.648 06:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.550 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:22.551 06:35:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:22.551 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:22.551 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:22.551 Found net devices under 0000:84:00.0: cvl_0_0 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:22.551 Found net devices under 0000:84:00.1: cvl_0_1 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.551 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:22.810 06:35:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:22.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:22.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:31:22.810 00:31:22.810 --- 10.0.0.2 ping statistics --- 00:31:22.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.810 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:22.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:22.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:31:22.810 00:31:22.810 --- 10.0.0.1 ping statistics --- 00:31:22.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.810 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1216679 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1216679 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1216679 ']' 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:22.810 06:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.810 [2024-12-08 06:35:12.859901] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:22.810 [2024-12-08 06:35:12.861004] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:31:22.810 [2024-12-08 06:35:12.861073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.070 [2024-12-08 06:35:12.931242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.070 [2024-12-08 06:35:12.986678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.070 [2024-12-08 06:35:12.986766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.070 [2024-12-08 06:35:12.986802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.070 [2024-12-08 06:35:12.986814] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.070 [2024-12-08 06:35:12.986840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:23.070 [2024-12-08 06:35:12.987483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.070 [2024-12-08 06:35:13.068426] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:23.070 [2024-12-08 06:35:13.068772] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
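
A note on the environment the target above runs in: nvmf_tcp_init moved one port of the NIC (cvl_0_0) into a private network namespace and left the sibling port (cvl_0_1) on the host, so initiator and target talk over real hardware between 10.0.0.1 and 10.0.0.2. Condensed from the traced commands into a sketch (the address flushes, loopback setup, and the iptables ACCEPT rule are omitted here):

    ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                            # reachability check before tests run

nvmf_tgt is then launched through ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace above.
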
00:31:23.070 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:23.070 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:23.070 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:23.070 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:23.070 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.070 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.070 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:23.070 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:23.070 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.070 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.070 [2024-12-08 06:35:13.124065] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.070 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.071 [2024-12-08 06:35:13.140270] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:23.071 06:35:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.071 malloc0 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:23.071 { 00:31:23.071 "params": { 00:31:23.071 "name": "Nvme$subsystem", 00:31:23.071 "trtype": "$TEST_TRANSPORT", 00:31:23.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.071 "adrfam": "ipv4", 00:31:23.071 "trsvcid": "$NVMF_PORT", 00:31:23.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.071 "hdgst": ${hdgst:-false}, 00:31:23.071 "ddgst": ${ddgst:-false} 00:31:23.071 }, 00:31:23.071 "method": "bdev_nvme_attach_controller" 00:31:23.071 } 00:31:23.071 EOF 00:31:23.071 )") 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:23.071 06:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:23.071 "params": { 00:31:23.071 "name": "Nvme1", 00:31:23.071 "trtype": "tcp", 00:31:23.071 "traddr": "10.0.0.2", 00:31:23.071 "adrfam": "ipv4", 00:31:23.071 "trsvcid": "4420", 00:31:23.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.071 "hdgst": false, 00:31:23.071 "ddgst": false 00:31:23.071 }, 00:31:23.071 "method": "bdev_nvme_attach_controller" 00:31:23.071 }' 00:31:23.330 [2024-12-08 06:35:13.227745] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:31:23.330 [2024-12-08 06:35:13.227827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216706 ]
00:31:23.330 [2024-12-08 06:35:13.299616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:23.330 [2024-12-08 06:35:13.359728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:23.589 Running I/O for 10 seconds...
00:31:25.467 5994.00 IOPS, 46.83 MiB/s [2024-12-08T05:35:16.966Z] 6137.50 IOPS, 47.95 MiB/s [2024-12-08T05:35:17.905Z] 6107.00 IOPS, 47.71 MiB/s [2024-12-08T05:35:18.846Z] 6154.50 IOPS, 48.08 MiB/s [2024-12-08T05:35:19.786Z] 6140.40 IOPS, 47.97 MiB/s [2024-12-08T05:35:20.727Z] 6161.00 IOPS, 48.13 MiB/s [2024-12-08T05:35:21.665Z] 6141.71 IOPS, 47.98 MiB/s [2024-12-08T05:35:22.599Z] 6155.75 IOPS, 48.09 MiB/s [2024-12-08T05:35:23.978Z] 6175.33 IOPS, 48.24 MiB/s [2024-12-08T05:35:23.978Z] 6187.80 IOPS, 48.34 MiB/s
00:31:33.859 Latency(us)
00:31:33.859 [2024-12-08T05:35:23.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:33.859 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:31:33.859 Verification LBA range: start 0x0 length 0x1000
00:31:33.859 Nvme1n1 : 10.02 6190.77 48.37 0.00 0.00 20623.42 2281.62 27962.03
00:31:33.859 [2024-12-08T05:35:23.978Z] ===================================================================================================================
00:31:33.859 [2024-12-08T05:35:23.978Z] Total : 6190.77 48.37 0.00 0.00 20623.42 2281.62 27962.03
00:31:33.859 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1217885
00:31:33.859 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:31:33.859 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:33.859 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:31:33.859 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:31:33.859 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:31:33.859 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:31:33.860 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:33.860 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:33.860 {
00:31:33.860 "params": {
00:31:33.860 "name": "Nvme$subsystem",
00:31:33.860 "trtype": "$TEST_TRANSPORT",
00:31:33.860 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:33.860 "adrfam": "ipv4",
00:31:33.860 "trsvcid": "$NVMF_PORT",
00:31:33.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:33.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:33.860 "hdgst": ${hdgst:-false},
00:31:33.860 "ddgst": ${ddgst:-false}
00:31:33.860 },
00:31:33.860 "method": "bdev_nvme_attach_controller"
00:31:33.860 }
00:31:33.860 EOF
00:31:33.860 )")
00:31:33.860 [2024-12-08 06:35:23.816061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1
already in use 00:31:33.860 [2024-12-08 06:35:23.816121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:33.860 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:33.860 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:33.860 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:33.860 "params": { 00:31:33.860 "name": "Nvme1", 00:31:33.860 "trtype": "tcp", 00:31:33.860 "traddr": "10.0.0.2", 00:31:33.860 "adrfam": "ipv4", 00:31:33.860 "trsvcid": "4420", 00:31:33.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:33.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:33.860 "hdgst": false, 00:31:33.860 "ddgst": false 00:31:33.860 }, 00:31:33.860 "method": "bdev_nvme_attach_controller" 00:31:33.860 }' 00:31:33.860 [2024-12-08 06:35:23.823951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.823975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.831949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.831971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.839943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.839972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.847944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.847965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.855944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.855964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.856136] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:31:33.860 [2024-12-08 06:35:23.856194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217885 ] 00:31:33.860 [2024-12-08 06:35:23.863945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.863966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.871942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.871962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.879944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.879963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.887943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.887963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.895945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.895965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.903944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.903965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.911943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.911963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.919942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.919961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.925295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.860 [2024-12-08 06:35:23.927942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.927962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.936022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.936066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.943989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.944040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.951944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.951965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.959943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.959964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:31:33.860 [2024-12-08 06:35:23.967945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.967967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.860 [2024-12-08 06:35:23.975957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.860 [2024-12-08 06:35:23.975991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:23.983956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:23.983982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:23.987693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.117 [2024-12-08 06:35:23.991947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:23.991969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:23.999949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:23.999971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.008014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.008056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.015994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.016049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.023995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.024052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.032000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.032055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.039996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.040051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.047956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.047981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.055980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.056033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.063995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.064052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.071993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.072048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 
06:35:24.079950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.079972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.087947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.087968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.095960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.095986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.103951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.103976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.111948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.111971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.119949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.119981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.127951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.127975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.135954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.135980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.143951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.143975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.151956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.151983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.159953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.159977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 Running I/O for 5 seconds... 
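
The burst of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above and below is the point of this phase, not a failure: while bdevperf drives I/O, the test keeps re-issuing the same namespace-add RPC against the live subsystem, and each rejected attempt appears to pause and resume the subsystem (the nvmf_rpc_ns_paused frames in the errors hint at this), forcing in-flight requests through the queue/resubmit path. A hypothetical reconstruction of that driver loop, where the while-loop shape is an assumption and the subsystem NQN, bdev name, and NSID are the ones from the trace above:

    # re-add a duplicate namespace while the perf job is alive;
    # every call is expected to fail cleanly and leave the target healthy
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
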
00:31:34.117 [2024-12-08 06:35:24.176963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.176990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.187097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.187122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.201899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.201925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.211144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.211168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.224815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.224842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.117 [2024-12-08 06:35:24.234619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.117 [2024-12-08 06:35:24.234644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.248361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.248397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.257984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.258022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.269088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.269112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.278992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.279035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.291451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.291474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.304031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.304056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.313515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.313539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.324614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.324655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.334864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 
[2024-12-08 06:35:24.334891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.349324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.349348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.358401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.358432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.369426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.369450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.379473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.379496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.392385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.392417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.401365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.374 [2024-12-08 06:35:24.401389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.374 [2024-12-08 06:35:24.412717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.375 [2024-12-08 06:35:24.412750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.375 [2024-12-08 06:35:24.422741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.375 [2024-12-08 06:35:24.422765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.375 [2024-12-08 06:35:24.436053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.375 [2024-12-08 06:35:24.436096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.375 [2024-12-08 06:35:24.445208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.375 [2024-12-08 06:35:24.445231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.375 [2024-12-08 06:35:24.456140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.375 [2024-12-08 06:35:24.456164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.375 [2024-12-08 06:35:24.465458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.375 [2024-12-08 06:35:24.465481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.375 [2024-12-08 06:35:24.476118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.375 [2024-12-08 06:35:24.476152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.375 [2024-12-08 06:35:24.486242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.375 [2024-12-08 06:35:24.486266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.630 [2024-12-08 06:35:24.500313] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.630 [2024-12-08 06:35:24.500337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.509101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.509124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.520476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.520499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.530662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.530693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.545316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.545340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.554456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.554480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.565470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.565494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.576066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.576103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.587131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.587155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.601100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.601124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.610302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.610326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.621448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.621471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.632017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.632041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.642276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.642300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.657318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.657343] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.666661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.666684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.679941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.679967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.689160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.689183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.700479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.700503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.710903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.710929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.723171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.723194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.737849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.737875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.631 [2024-12-08 06:35:24.746919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.631 [2024-12-08 06:35:24.746944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.889 [2024-12-08 06:35:24.761885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.889 [2024-12-08 06:35:24.761912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.889 [2024-12-08 06:35:24.771436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.889 [2024-12-08 06:35:24.771459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.889 [2024-12-08 06:35:24.782637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.889 [2024-12-08 06:35:24.782660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.889 [2024-12-08 06:35:24.795257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.889 [2024-12-08 06:35:24.795280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.889 [2024-12-08 06:35:24.804796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.889 [2024-12-08 06:35:24.804821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.889 [2024-12-08 06:35:24.816011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.889 [2024-12-08 06:35:24.816036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.889 [2024-12-08 06:35:24.826291] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.889 [2024-12-08 06:35:24.826315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.840322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.840345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.848964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.848990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.859937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.859963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.870387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.870410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.884901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.884927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.893956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.893982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.905038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.905062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.915169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.915194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.930264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.930288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.939422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.939446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.950521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.950545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.965236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.965260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.973898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.973923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.985053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.985091] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:24.995514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:24.995539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.890 [2024-12-08 06:35:25.006259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.890 [2024-12-08 06:35:25.006285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.150 [2024-12-08 06:35:25.020208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.150 [2024-12-08 06:35:25.020234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.029441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.029465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.040494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.040518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.049794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.049820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.061224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.061248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.071697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.071744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.082286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.082310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.094781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.094808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.109021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.109047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.118749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.118789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.129807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.129833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.145891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.145918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.155230] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.155254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.166410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.166435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 12201.00 IOPS, 95.32 MiB/s [2024-12-08T05:35:25.270Z] [2024-12-08 06:35:25.181050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.181090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.190145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.190170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.201597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.201621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.212323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.212346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.223028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.223052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.235391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.235415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.245133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.245157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.256266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.256289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.151 [2024-12-08 06:35:25.266264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.151 [2024-12-08 06:35:25.266288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.281998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.282037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.291503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.291526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.303019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.303044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.315936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:35.411 [2024-12-08 06:35:25.315961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.325566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.325590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.337314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.337338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.347853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.347881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.358270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.358294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.372751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.372780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.382707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.382766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.398133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.398158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.407474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.407498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.418473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.418498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.432320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.432344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.442347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.442371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.453434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.453458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.463519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.463543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.476324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.476348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.485561] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.485584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.496843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.496868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.507274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.507297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.519798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.519823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.411 [2024-12-08 06:35:25.529810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.411 [2024-12-08 06:35:25.529837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.541561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.541586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.556613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.556638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.566448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.566471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.578087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.578111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.593528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.593553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.602869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.602904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.616768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.616794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.625690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.625739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.636842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.636867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.646844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.646869] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.659567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.659590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.669225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.669249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.680490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.680514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.691223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.691246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.704323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.704347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.713571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.713595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.725117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.725141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.735606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.735629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.748150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.748174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.757397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.757421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.768457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.768481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.778885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.778922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.791998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.792038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.802120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.802143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.701 [2024-12-08 06:35:25.813433] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.701 [2024-12-08 06:35:25.813464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.830376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.830401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.840209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.840234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.851665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.851689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.862625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.862648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.875555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.875579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.884854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.884880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.896194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.896217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.906651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.906674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.922575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.922599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.937467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.937491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.946820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.946845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.958483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.958507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.972482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.972506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.981815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.981840] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:25.993124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:25.993147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:26.002783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:26.002808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:26.017783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:26.017811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:26.027981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:26.028029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:26.040146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:26.040181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:26.050960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:26.050986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:26.064960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:26.064987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.962 [2024-12-08 06:35:26.075189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.962 [2024-12-08 06:35:26.075215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.087170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.087196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.100888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.100915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.110312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.110336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.124452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.124477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.134208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.134232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.145882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.145909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.161128] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.161154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.171068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.171108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 12076.00 IOPS, 94.34 MiB/s [2024-12-08T05:35:26.339Z] [2024-12-08 06:35:26.183232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.183257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.194427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.194451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.209556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.209580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.219497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.219522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.231063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.231103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.241666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.241691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.256232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.256257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.265867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.265894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.277571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.277596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.292851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.292878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.302385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.302410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.314157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.220 [2024-12-08 06:35:26.314181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.220 [2024-12-08 06:35:26.329645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:36.220 [2024-12-08 06:35:26.329670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.479 [2024-12-08 06:35:26.339414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.479 [2024-12-08 06:35:26.339439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.479 [2024-12-08 06:35:26.350731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.479 [2024-12-08 06:35:26.350758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.479 [2024-12-08 06:35:26.365125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.479 [2024-12-08 06:35:26.365150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.479 [2024-12-08 06:35:26.374935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.479 [2024-12-08 06:35:26.374962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.479 [2024-12-08 06:35:26.389749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.479 [2024-12-08 06:35:26.389785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.479 [2024-12-08 06:35:26.399192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.479 [2024-12-08 06:35:26.399216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.479 [2024-12-08 06:35:26.411169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.479 [2024-12-08 06:35:26.411193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.479 [2024-12-08 06:35:26.421847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.479 [2024-12-08 06:35:26.421873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.479 [2024-12-08 06:35:26.436467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.479 [2024-12-08 06:35:26.436492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.479 [2024-12-08 06:35:26.445671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.479 [2024-12-08 06:35:26.445710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.479 [2024-12-08 06:35:26.457226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.479 [2024-12-08 06:35:26.457251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.479 [2024-12-08 06:35:26.467875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.479 [2024-12-08 06:35:26.467901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.480 [2024-12-08 06:35:26.478667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.480 [2024-12-08 06:35:26.478692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.480 [2024-12-08 06:35:26.491985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.480 [2024-12-08 06:35:26.492025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.480 [2024-12-08 06:35:26.501880] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.480 [2024-12-08 06:35:26.501907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.480 [2024-12-08 06:35:26.513870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.480 [2024-12-08 06:35:26.513897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.480 [2024-12-08 06:35:26.529052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.480 [2024-12-08 06:35:26.529094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.480 [2024-12-08 06:35:26.538680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.480 [2024-12-08 06:35:26.538728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.480 [2024-12-08 06:35:26.553816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.480 [2024-12-08 06:35:26.553847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.480 [2024-12-08 06:35:26.563456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.480 [2024-12-08 06:35:26.563481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.480 [2024-12-08 06:35:26.575444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.480 [2024-12-08 06:35:26.575469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.480 [2024-12-08 06:35:26.586468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.480 [2024-12-08 06:35:26.586493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.601137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.601163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.610848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.610893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.624903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.624931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.635320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.635346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.646769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.646795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.660650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.660675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.670107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.670134] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.682053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.682093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.698257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.698282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.714299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.714337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.724335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.724360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.735950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.735976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.746035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.746059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.761993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.762032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.771505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.771530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.783560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.783585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.794925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.794951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.807624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.807649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.816867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.816894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.828917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.828943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.839978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.740 [2024-12-08 06:35:26.840019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.740 [2024-12-08 06:35:26.850998] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:36.741 [2024-12-08 06:35:26.851038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... this two-line error pair (spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" followed by nvmf_rpc_ns_paused "Unable to add namespace") repeats back to back with only the timestamps advancing, 06:35:26.861 through 06:35:27.165 ...]
00:31:37.260 11939.33 IOPS, 93.28 MiB/s [2024-12-08T05:35:27.379Z]
[... the error pair continues, 06:35:27.176 through 06:35:28.171 ...]
00:31:38.300 11962.00 IOPS, 93.45 MiB/s [2024-12-08T05:35:28.419Z]
[... the error pair continues, 06:35:28.183 through 06:35:29.171 ...]
00:31:39.081 11905.00 IOPS, 93.01 MiB/s [2024-12-08T05:35:29.200Z]
[... the error pair continues, 06:35:29.182 through 06:35:29.187 ...]
00:31:39.081 Latency(us)
00:31:39.081 [2024-12-08T05:35:29.200Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:39.081 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:31:39.081 Nvme1n1             :       5.01   11906.19      93.02       0.00     0.00   10736.21    2864.17   17767.54
00:31:39.081 [2024-12-08T05:35:29.200Z] ===================================================================================================================
[2024-12-08T05:35:29.200Z] Total               :      11906.19      93.02       0.00     0.00   10736.21    2864.17   17767.54
[... the same error pair continues, 06:35:29.195 through 06:35:29.415 ...]
00:31:39.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1217885) - No such process
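The long run of paired errors above is the expected negative path: NSID 1 stays attached to nqn.2016-06.io.spdk:cnode1 while I/O is in flight, so each concurrent attempt to add a namespace at that NSID is rejected by spdk_nvmf_subsystem_add_ns_ext and surfaced by the RPC layer. A minimal host-side reproduction against a target in that state might look like the following sketch using SPDK's scripts/rpc.py (the malloc0 bdev name is assumed from this test's setup):

  # Attempting to claim an NSID that is still occupied fails exactly as logged:
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # => "Requested NSID 1 already in use" / "Unable to add namespace"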
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1217885
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:39.343 delay0
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:39.343 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:31:39.619 [2024-12-08 06:35:29.494235] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:31:47.742 Initializing NVMe Controllers
00:31:47.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:47.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:47.742 Initialization complete. Launching workers.
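Stripped of the xtrace prefixes, the trace above swaps the bdev behind NSID 1 for a delay bdev and then drives the target with the abort example. A rough standalone equivalent, assuming the same running target and that rpc_cmd forwards to scripts/rpc.py as in SPDK's autotest helpers:

  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The four bdev_delay_create latency arguments (average and tail, for reads and writes) keep commands in flight long enough to be abort targets, which the 64-deep randrw abort run (-q 64, -t 5 seconds) then exercises, as the results below show.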
00:31:47.742 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 243, failed: 20499
00:31:47.742 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20619, failed to submit 123
00:31:47.742 success 20536, unsuccessful 83, failed 0
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:47.742 rmmod nvme_tcp
00:31:47.742 rmmod nvme_fabrics
00:31:47.742 rmmod nvme_keyring
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1216679 ']'
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1216679
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1216679 ']'
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1216679
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1216679
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1216679'
00:31:47.742 killing process with pid 1216679
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1216679
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1216679
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
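Worth noting in the nvmftestfini teardown above is the defensive sequence inside killprocess: confirm the pid is still alive, look up what it is, then kill and reap it. Reduced to plain shell (pid and process name taken from this run; the sudo comparison guards an alternate kill path not taken here):

  pid=1216679
  kill -0 "$pid"                            # liveness probe; bail out if already gone
  name=$(ps --no-headers -o comm= "$pid")   # resolves to reactor_1 in this run
  [ "$name" = sudo ] || true                # the helper special-cases sudo-owned pids
  echo "killing process with pid $pid"
  kill "$pid" && wait "$pid"                # terminate, then reap the exit status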
00:31:47.742 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:47.743 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:31:47.743 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:31:47.743 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:47.743 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:31:47.743 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:47.743 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:47.743 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:47.743 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:47.743 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:49.125 06:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:49.125
00:31:49.125 real 0m28.576s
00:31:49.125 user 0m38.917s
00:31:49.125 sys 0m11.696s
00:31:49.125 06:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:49.125 06:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:49.125 ************************************
00:31:49.125 END TEST nvmf_zcopy
00:31:49.125 ************************************
00:31:49.125 06:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:31:49.125 06:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:49.125 06:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:49.125 06:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:49.125 ************************************
00:31:49.125 START TEST nvmf_nmic
00:31:49.125 ************************************
00:31:49.125 06:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:31:49.125 * Looking for test storage...
00:31:49.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:49.125 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:49.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.126 --rc genhtml_branch_coverage=1 00:31:49.126 --rc genhtml_function_coverage=1 00:31:49.126 --rc genhtml_legend=1 00:31:49.126 --rc geninfo_all_blocks=1 00:31:49.126 --rc geninfo_unexecuted_blocks=1 00:31:49.126 00:31:49.126 ' 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:49.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.126 --rc genhtml_branch_coverage=1 00:31:49.126 --rc genhtml_function_coverage=1 00:31:49.126 --rc genhtml_legend=1 00:31:49.126 --rc geninfo_all_blocks=1 00:31:49.126 --rc geninfo_unexecuted_blocks=1 00:31:49.126 00:31:49.126 ' 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:49.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.126 --rc genhtml_branch_coverage=1 00:31:49.126 --rc genhtml_function_coverage=1 00:31:49.126 --rc genhtml_legend=1 00:31:49.126 --rc geninfo_all_blocks=1 00:31:49.126 --rc geninfo_unexecuted_blocks=1 00:31:49.126 00:31:49.126 ' 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:49.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.126 --rc genhtml_branch_coverage=1 00:31:49.126 --rc genhtml_function_coverage=1 00:31:49.126 --rc genhtml_legend=1 00:31:49.126 --rc geninfo_all_blocks=1 00:31:49.126 --rc geninfo_unexecuted_blocks=1 00:31:49.126 00:31:49.126 ' 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.126 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.127 06:35:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.127 06:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:51.725 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.725 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:51.725 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:51.725 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:51.725 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:51.725 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:51.725 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:51.725 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:51.725 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:51.725 06:35:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:51.725 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:51.725 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:51.725 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:51.725 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:51.726 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.726 06:35:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:51.726 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:51.726 Found net devices under 0000:84:00.0: cvl_0_0 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.726 
06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:51.726 Found net devices under 0000:84:00.1: cvl_0_1 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:51.726 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
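The xtrace above moves one port of the discovered NIC pair into a private network namespace for the target and keeps the other for the initiator; link-up, the firewall rule, and ping checks follow just below. A minimal standalone sketch of that setup, assuming the device names and addresses from this run:

    # Target side lives in its own netns; initiator stays in the host ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target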
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:51.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:51.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms
00:31:51.727
00:31:51.727 --- 10.0.0.2 ping statistics ---
00:31:51.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:51.727 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:51.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:51.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms
00:31:51.727
00:31:51.727 --- 10.0.0.1 ping statistics ---
00:31:51.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:51.727 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1221394
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1221394
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1221394 ']'
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:51.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:51.727 [2024-12-08 06:35:41.421355] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:31:51.727 [2024-12-08 06:35:41.422460] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization...
00:31:51.727 [2024-12-08 06:35:41.422514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:51.727 [2024-12-08 06:35:41.498630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:51.727 [2024-12-08 06:35:41.559808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:51.727 [2024-12-08 06:35:41.559870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:51.727 [2024-12-08 06:35:41.559900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:51.727 [2024-12-08 06:35:41.559911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:51.727 [2024-12-08 06:35:41.559921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:51.727 [2024-12-08 06:35:41.561687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:51.727 [2024-12-08 06:35:41.561756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:51.727 [2024-12-08 06:35:41.561784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:51.727 [2024-12-08 06:35:41.561787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:51.727 [2024-12-08 06:35:41.651101] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:31:51.727 [2024-12-08 06:35:41.651332] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:31:51.727 [2024-12-08 06:35:41.651640] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:31:51.727 [2024-12-08 06:35:41.652316] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:31:51.727 [2024-12-08 06:35:41.652542] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
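With the network in place, nvmf_tgt is launched inside the namespace in interrupt mode and the harness blocks until the RPC socket answers. A hedged sketch of that step; the polling loop is only an approximation of the waitforlisten helper, and /var/tmp/spdk.sock is the default socket echoed above:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll until the target answers on the socket.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done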
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:51.727 [2024-12-08 06:35:41.710495] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:51.727 Malloc0
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:51.727 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:51.728 [2024-12-08 06:35:41.770742] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:31:51.728 test case1: single bdev can't be used in multiple subsystems
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:51.728 [2024-12-08 06:35:41.794435] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:31:51.728 [2024-12-08 06:35:41.794465] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:31:51.728 [2024-12-08 06:35:41.794496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:51.728 request:
00:31:51.728 {
00:31:51.728 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:31:51.728 "namespace": {
00:31:51.728 "bdev_name": "Malloc0",
00:31:51.728 "no_auto_visible": false,
00:31:51.728 "hide_metadata": false
00:31:51.728 },
00:31:51.728 "method": "nvmf_subsystem_add_ns",
00:31:51.728 "req_id": 1
00:31:51.728 }
00:31:51.728 Got JSON-RPC error response
00:31:51.728 response:
00:31:51.728 {
00:31:51.728 "code": -32602,
00:31:51.728 "message": "Invalid parameters"
00:31:51.728 }
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:31:51.728 Adding namespace failed - expected result.
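Test case1 turns on the expected failure just logged: Malloc0 is already claimed exclusive_write by cnode1, so adding it to cnode2 must be rejected and nmic_status flips to 1. A minimal sketch of the same assertion, assuming spdk/scripts/rpc.py is on PATH (the test's rpc_cmd wrapper amounts to this):

    # The second add_ns is supposed to fail; treat success as a test failure.
    if rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "namespace add unexpectedly succeeded" >&2
        exit 1
    fi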
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:31:51.728 test case2: host connect to nvmf target in multiple paths
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:51.728 [2024-12-08 06:35:41.802520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:51.728 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:31:51.989 06:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:31:52.247 06:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:31:52.247 06:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:31:52.247 06:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:31:52.247 06:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:31:52.247 06:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:31:54.146 06:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:31:54.146 06:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:31:54.146 06:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:31:54.146 06:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:31:54.146 06:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:31:54.146 06:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:31:54.146 06:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:31:54.146 [global]
00:31:54.146 thread=1
00:31:54.146 invalidate=1
00:31:54.146 rw=write
00:31:54.146 time_based=1
00:31:54.146 runtime=1
00:31:54.146 ioengine=libaio
00:31:54.146 direct=1
00:31:54.146 bs=4096
00:31:54.146 iodepth=1
00:31:54.146 norandommap=0
00:31:54.146 numjobs=1
00:31:54.146
00:31:54.146 verify_dump=1
00:31:54.146 verify_backlog=512
00:31:54.146 verify_state_save=0
00:31:54.146 do_verify=1
00:31:54.146 verify=crc32c-intel
00:31:54.146 [job0]
00:31:54.146 filename=/dev/nvme0n1
00:31:54.404 Could not set queue depth (nvme0n1)
00:31:54.404 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:54.404 fio-3.35
00:31:54.404 Starting 1 thread
00:31:55.778
00:31:55.778 job0: (groupid=0, jobs=1): err= 0: pid=1221784: Sun Dec 8 06:35:45 2024
00:31:55.778 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec)
00:31:55.778 slat (nsec): min=12261, max=34547, avg=16366.73, stdev=4229.35
00:31:55.778 clat (usec): min=40749, max=41064, avg=40965.36, stdev=68.85
00:31:55.778 lat (usec): min=40761, max=41079, avg=40981.73, stdev=67.30
00:31:55.778 clat percentiles (usec):
00:31:55.778 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:31:55.778 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:31:55.778 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:31:55.778 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:31:55.778 | 99.99th=[41157]
00:31:55.778 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets
00:31:55.778 slat (usec): min=9, max=783, avg=18.61, stdev=34.90
00:31:55.778 clat (usec): min=143, max=361, avg=185.56, stdev=32.92
00:31:55.778 lat (usec): min=153, max=1012, avg=204.17, stdev=53.02
00:31:55.778 clat percentiles (usec):
00:31:55.778 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 161],
00:31:55.778 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180],
00:31:55.778 | 70.00th=[ 194], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 249],
00:31:55.778 | 99.00th=[ 273], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363],
00:31:55.778 | 99.99th=[ 363]
00:31:55.778 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:31:55.778 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:31:55.778 lat (usec) : 250=92.32%, 500=3.56%
00:31:55.778 lat (msec) : 50=4.12%
00:31:55.778 cpu : usr=0.50%, sys=1.19%, ctx=537, majf=0, minf=1
00:31:55.778 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:55.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:55.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:55.778 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:55.778 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:55.778
00:31:55.778 Run status group 0 (all jobs):
00:31:55.778 READ: bw=87.2KiB/s (89.3kB/s), 87.2KiB/s-87.2KiB/s (89.3kB/s-89.3kB/s), io=88.0KiB (90.1kB), run=1009-1009msec
00:31:55.778 WRITE: bw=2030KiB/s (2078kB/s), 2030KiB/s-2030KiB/s (2078kB/s-2078kB/s), io=2048KiB (2097kB), run=1009-1009msec
00:31:55.778
00:31:55.778 Disk stats (read/write):
00:31:55.778 nvme0n1: ios=76/512, merge=0/0, ticks=946/94, in_queue=1040, util=98.60%
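The job file and results above were generated by the fio-wrapper helper; the same workload can be rerun by hand by saving the job file verbatim. A sketch, assuming the connected namespace still shows up as /dev/nvme0n1:

    cat > /tmp/nmic-job0.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel
    [job0]
    filename=/dev/nvme0n1
    EOF
    fio /tmp/nmic-job0.fio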
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:31:55.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:55.778 rmmod nvme_tcp
00:31:55.778 rmmod nvme_fabrics
00:31:55.778 rmmod nvme_keyring
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1221394 ']'
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1221394
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1221394 ']'
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1221394
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1221394
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1221394'
00:31:55.778 killing process with pid 1221394
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1221394
00:31:55.778 06:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1221394
00:31:56.036 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:56.036 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:56.036 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:56.036 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr
00:31:56.036 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save
00:31:56.036 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:56.036 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore
00:31:56.036 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:56.036 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:56.036 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:56.036 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:56.036 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:58.575
00:31:58.575 real 0m9.103s
00:31:58.575 user 0m17.132s
00:31:58.575 sys 0m3.242s
00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:58.575 ************************************
00:31:58.575 END TEST nvmf_nmic
00:31:58.575 ************************************
00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:58.575 ************************************
00:31:58.575 START TEST nvmf_fio_target
00:31:58.575 ************************************
00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:31:58.575 * Looking for test storage...
00:31:58.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:58.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.575 --rc genhtml_branch_coverage=1 00:31:58.575 --rc genhtml_function_coverage=1 00:31:58.575 --rc genhtml_legend=1 00:31:58.575 --rc geninfo_all_blocks=1 00:31:58.575 --rc geninfo_unexecuted_blocks=1 00:31:58.575 00:31:58.575 ' 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:58.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.575 --rc genhtml_branch_coverage=1 00:31:58.575 --rc genhtml_function_coverage=1 00:31:58.575 --rc genhtml_legend=1 00:31:58.575 --rc geninfo_all_blocks=1 00:31:58.575 --rc geninfo_unexecuted_blocks=1 00:31:58.575 00:31:58.575 ' 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:58.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.575 --rc genhtml_branch_coverage=1 00:31:58.575 --rc genhtml_function_coverage=1 00:31:58.575 --rc genhtml_legend=1 00:31:58.575 --rc geninfo_all_blocks=1 00:31:58.575 --rc geninfo_unexecuted_blocks=1 00:31:58.575 00:31:58.575 ' 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:58.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.575 --rc genhtml_branch_coverage=1 00:31:58.575 --rc genhtml_function_coverage=1 00:31:58.575 --rc genhtml_legend=1 00:31:58.575 --rc geninfo_all_blocks=1 00:31:58.575 --rc geninfo_unexecuted_blocks=1 00:31:58.575 
00:31:58.575 ' 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.575 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:58.576 06:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.479 06:35:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.479 06:35:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:00.479 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:00.479 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:00.479 Found net 
devices under 0000:84:00.0: cvl_0_0 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:00.479 Found net devices under 0000:84:00.1: cvl_0_1 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.479 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:32:00.480 00:32:00.480 --- 10.0.0.2 ping statistics --- 00:32:00.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.480 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:32:00.480 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:32:00.737 00:32:00.737 --- 10.0.0.1 ping statistics --- 00:32:00.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.737 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1223993 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1223993 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1223993 ']' 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
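Note: the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above amounts to polling the newly launched target's RPC Unix socket until it responds. A minimal bash sketch of such a poll is below — the rpc.py path and /var/tmp/spdk.sock are taken from the trace, and rpc_get_methods is a standard SPDK JSON-RPC method, but the loop itself is illustrative, not the harness's exact waitforlisten code:

    # Poll the target's RPC socket until it answers (illustrative sketch).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds only once the app is up and listening
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done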
00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.737 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.737 [2024-12-08 06:35:50.677488] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:00.737 [2024-12-08 06:35:50.678624] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:32:00.737 [2024-12-08 06:35:50.678679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.737 [2024-12-08 06:35:50.753824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:00.737 [2024-12-08 06:35:50.811054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.737 [2024-12-08 06:35:50.811126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.737 [2024-12-08 06:35:50.811139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.737 [2024-12-08 06:35:50.811150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.737 [2024-12-08 06:35:50.811169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.737 [2024-12-08 06:35:50.812846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.737 [2024-12-08 06:35:50.812907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.737 [2024-12-08 06:35:50.812973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.737 [2024-12-08 06:35:50.812977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.994 [2024-12-08 06:35:50.900366] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:00.994 [2024-12-08 06:35:50.900535] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:00.994 [2024-12-08 06:35:50.900834] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:00.994 [2024-12-08 06:35:50.901458] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.994 [2024-12-08 06:35:50.901650] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
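Note: the fio.sh trace that follows provisions the target one rpc.py call at a time. Condensed for readability, the sequence it issues is roughly the following — every flag, bdev name, NQN, address, and hostnqn/hostid value is copied from the trace itself; only the $rpc shorthand and the loop grouping are editorial, and the script actually interleaves the listener and namespace calls slightly differently than shown here:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Create the TCP transport (flags exactly as passed in the trace)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # Seven 64 MiB malloc bdevs with 512-byte blocks: Malloc0..Malloc6
    for _ in 0 1 2 3 4 5 6; do
        $rpc bdev_malloc_create 64 512
    done
    # raid0 over Malloc2/Malloc3, concat0 over Malloc4/Malloc5/Malloc6
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    # Subsystem cnode1, its four namespaces, and the TCP listener on 10.0.0.2:4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Host side: connect, which surfaces the namespaces as /dev/nvme0n1..n4
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420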
00:32:00.994 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.994 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:00.994 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.994 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.994 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.994 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.994 06:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:01.251 [2024-12-08 06:35:51.201668] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.251 06:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:01.509 06:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:01.509 06:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:01.768 06:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:01.768 06:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:02.339 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:02.339 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:02.339 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:02.339 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:02.909 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:02.909 06:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:02.909 06:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:03.478 06:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:03.478 06:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:03.478 06:35:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:32:03.478 06:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:03.737 06:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:04.303 06:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:04.303 06:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:04.303 06:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:04.303 06:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:04.561 06:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:04.820 [2024-12-08 06:35:54.889884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:04.820 06:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:05.078 06:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:05.643 06:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:05.643 06:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:05.643 06:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:05.643 06:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:05.643 06:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:05.643 06:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:05.643 06:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:07.547 06:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:07.547 06:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:32:07.547 06:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:07.547 06:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:07.547 06:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:07.547 06:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:32:07.547 06:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:07.547 [global] 00:32:07.547 thread=1 00:32:07.547 invalidate=1 00:32:07.547 rw=write 00:32:07.547 time_based=1 00:32:07.547 runtime=1 00:32:07.547 ioengine=libaio 00:32:07.547 direct=1 00:32:07.547 bs=4096 00:32:07.547 iodepth=1 00:32:07.547 norandommap=0 00:32:07.547 numjobs=1 00:32:07.547 00:32:07.547 verify_dump=1 00:32:07.547 verify_backlog=512 00:32:07.547 verify_state_save=0 00:32:07.547 do_verify=1 00:32:07.547 verify=crc32c-intel 00:32:07.547 [job0] 00:32:07.547 filename=/dev/nvme0n1 00:32:07.547 [job1] 00:32:07.547 filename=/dev/nvme0n2 00:32:07.547 [job2] 00:32:07.547 filename=/dev/nvme0n3 00:32:07.547 [job3] 00:32:07.547 filename=/dev/nvme0n4 00:32:07.805 Could not set queue depth (nvme0n1) 00:32:07.805 Could not set queue depth (nvme0n2) 00:32:07.806 Could not set queue depth (nvme0n3) 00:32:07.806 Could not set queue depth (nvme0n4) 00:32:07.806 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.806 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.806 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.806 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.806 fio-3.35 00:32:07.806 Starting 4 threads 00:32:09.180 00:32:09.180 job0: (groupid=0, jobs=1): err= 0: pid=1224939: Sun Dec 8 06:35:59 2024 00:32:09.180 read: IOPS=298, BW=1195KiB/s (1223kB/s)(1196KiB/1001msec) 00:32:09.180 slat (nsec): min=7410, max=43586, avg=17091.66, stdev=4315.32 00:32:09.180 clat (usec): min=242, max=41055, avg=2915.21, stdev=9980.67 00:32:09.180 lat (usec): min=262, max=41066, avg=2932.30, stdev=9980.35 00:32:09.180 clat percentiles (usec): 00:32:09.180 | 1.00th=[ 249], 5.00th=[ 253], 10.00th=[ 255], 20.00th=[ 260], 00:32:09.180 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 269], 00:32:09.180 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[41157], 00:32:09.180 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:09.180 | 99.99th=[41157] 00:32:09.180 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:32:09.180 slat (nsec): min=8186, max=22386, avg=9409.94, stdev=1295.59 00:32:09.180 clat (usec): min=158, max=355, avg=225.26, stdev=17.31 00:32:09.180 lat (usec): min=170, max=377, avg=234.67, stdev=17.58 00:32:09.180 clat percentiles (usec): 00:32:09.180 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 217], 00:32:09.180 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 231], 00:32:09.180 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 247], 00:32:09.180 | 99.00th=[ 265], 
99.50th=[ 281], 99.90th=[ 355], 99.95th=[ 355], 00:32:09.180 | 99.99th=[ 355] 00:32:09.180 bw ( KiB/s): min= 4096, max= 4096, per=28.09%, avg=4096.00, stdev= 0.00, samples=1 00:32:09.180 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:09.180 lat (usec) : 250=61.65%, 500=35.88% 00:32:09.180 lat (msec) : 20=0.12%, 50=2.34% 00:32:09.180 cpu : usr=0.40%, sys=1.10%, ctx=811, majf=0, minf=1 00:32:09.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.180 issued rwts: total=299,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:09.180 job1: (groupid=0, jobs=1): err= 0: pid=1224940: Sun Dec 8 06:35:59 2024 00:32:09.180 read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec) 00:32:09.180 slat (nsec): min=7292, max=46857, avg=18490.61, stdev=12067.55 00:32:09.180 clat (usec): min=40500, max=41054, avg=40955.20, stdev=106.86 00:32:09.180 lat (usec): min=40508, max=41071, avg=40973.69, stdev=108.21 00:32:09.180 clat percentiles (usec): 00:32:09.180 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:09.180 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:09.180 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:09.180 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:09.180 | 99.99th=[41157] 00:32:09.180 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:32:09.180 slat (nsec): min=6054, max=32166, avg=7466.86, stdev=1355.46 00:32:09.180 clat (usec): min=152, max=253, avg=176.75, stdev= 8.92 00:32:09.180 lat (usec): min=159, max=285, avg=184.21, stdev= 9.41 00:32:09.180 clat percentiles (usec): 00:32:09.180 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:32:09.180 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 178], 00:32:09.180 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 188], 95.00th=[ 190], 00:32:09.180 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 253], 99.95th=[ 253], 00:32:09.180 | 99.99th=[ 253] 00:32:09.180 bw ( KiB/s): min= 4096, max= 4096, per=28.09%, avg=4096.00, stdev= 0.00, samples=1 00:32:09.180 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:09.180 lat (usec) : 250=95.51%, 500=0.19% 00:32:09.180 lat (msec) : 50=4.30% 00:32:09.180 cpu : usr=0.39%, sys=0.19%, ctx=535, majf=0, minf=1 00:32:09.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.180 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:09.180 job2: (groupid=0, jobs=1): err= 0: pid=1224941: Sun Dec 8 06:35:59 2024 00:32:09.180 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:32:09.180 slat (nsec): min=8328, max=42858, avg=16947.09, stdev=9363.42 00:32:09.180 clat (usec): min=40827, max=41917, avg=41036.86, stdev=213.94 00:32:09.180 lat (usec): min=40861, max=41930, avg=41053.81, stdev=211.70 00:32:09.180 clat percentiles (usec): 00:32:09.180 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:09.180 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:32:09.180 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:09.180 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:09.180 | 99.99th=[41681] 00:32:09.180 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:32:09.180 slat (nsec): min=9065, max=28475, avg=10029.14, stdev=1284.26 00:32:09.180 clat (usec): min=167, max=268, avg=197.24, stdev=18.69 00:32:09.180 lat (usec): min=177, max=297, avg=207.27, stdev=18.86 00:32:09.180 clat percentiles (usec): 00:32:09.180 | 1.00th=[ 172], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:32:09.180 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 202], 00:32:09.180 | 70.00th=[ 215], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 227], 00:32:09.180 | 99.00th=[ 235], 99.50th=[ 237], 99.90th=[ 269], 99.95th=[ 269], 00:32:09.180 | 99.99th=[ 269] 00:32:09.180 bw ( KiB/s): min= 4096, max= 4096, per=28.09%, avg=4096.00, stdev= 0.00, samples=1 00:32:09.180 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:09.180 lat (usec) : 250=95.69%, 500=0.19% 00:32:09.180 lat (msec) : 50=4.12% 00:32:09.180 cpu : usr=0.59%, sys=0.40%, ctx=534, majf=0, minf=1 00:32:09.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.180 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:09.180 job3: (groupid=0, jobs=1): err= 0: pid=1224942: Sun Dec 8 06:35:59 2024 00:32:09.180 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:09.180 slat (nsec): min=4428, max=71681, avg=10133.72, stdev=8837.78 00:32:09.180 clat (usec): min=179, max=617, avg=258.07, stdev=60.21 00:32:09.180 lat (usec): min=200, max=648, avg=268.21, stdev=67.51 00:32:09.180 clat percentiles (usec): 00:32:09.180 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 219], 00:32:09.180 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 245], 00:32:09.180 | 70.00th=[ 258], 80.00th=[ 281], 90.00th=[ 343], 95.00th=[ 408], 00:32:09.180 | 99.00th=[ 474], 99.50th=[ 482], 99.90th=[ 619], 99.95th=[ 619], 00:32:09.180 | 99.99th=[ 619] 00:32:09.180 write: IOPS=2245, BW=8983KiB/s (9199kB/s)(8992KiB/1001msec); 0 zone resets 00:32:09.180 slat (nsec): min=6000, max=39934, avg=9432.19, stdev=4201.97 00:32:09.181 clat (usec): min=146, max=364, avg=185.51, stdev=29.91 00:32:09.181 lat (usec): min=153, max=387, avg=194.94, stdev=30.64 00:32:09.181 clat percentiles (usec): 00:32:09.181 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 153], 20.00th=[ 157], 00:32:09.181 | 30.00th=[ 161], 40.00th=[ 172], 50.00th=[ 180], 60.00th=[ 188], 00:32:09.181 | 70.00th=[ 198], 80.00th=[ 217], 90.00th=[ 233], 95.00th=[ 241], 00:32:09.181 | 99.00th=[ 251], 99.50th=[ 258], 99.90th=[ 306], 99.95th=[ 310], 00:32:09.181 | 99.99th=[ 363] 00:32:09.181 bw ( KiB/s): min=10192, max=10192, per=69.89%, avg=10192.00, stdev= 0.00, samples=1 00:32:09.181 iops : min= 2548, max= 2548, avg=2548.00, stdev= 0.00, samples=1 00:32:09.181 lat (usec) : 250=82.47%, 500=17.36%, 750=0.16% 00:32:09.181 cpu : usr=1.70%, sys=4.90%, ctx=4296, majf=0, minf=2 00:32:09.181 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.181 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.181 issued rwts: total=2048,2248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.181 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:09.181 00:32:09.181 Run status group 0 (all jobs): 00:32:09.181 READ: bw=9218KiB/s (9439kB/s), 87.0KiB/s-8184KiB/s (89.1kB/s-8380kB/s), io=9568KiB (9798kB), run=1001-1038msec 00:32:09.181 WRITE: bw=14.2MiB/s (14.9MB/s), 1973KiB/s-8983KiB/s (2020kB/s-9199kB/s), io=14.8MiB (15.5MB), run=1001-1038msec 00:32:09.181 00:32:09.181 Disk stats (read/write): 00:32:09.181 nvme0n1: ios=68/512, merge=0/0, ticks=767/114, in_queue=881, util=90.88% 00:32:09.181 nvme0n2: ios=66/512, merge=0/0, ticks=844/88, in_queue=932, util=95.32% 00:32:09.181 nvme0n3: ios=18/512, merge=0/0, ticks=740/98, in_queue=838, util=88.92% 00:32:09.181 nvme0n4: ios=1795/2048, merge=0/0, ticks=428/371, in_queue=799, util=89.67% 00:32:09.181 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:09.181 [global] 00:32:09.181 thread=1 00:32:09.181 invalidate=1 00:32:09.181 rw=randwrite 00:32:09.181 time_based=1 00:32:09.181 runtime=1 00:32:09.181 ioengine=libaio 00:32:09.181 direct=1 00:32:09.181 bs=4096 00:32:09.181 iodepth=1 00:32:09.181 norandommap=0 00:32:09.181 numjobs=1 00:32:09.181 00:32:09.181 verify_dump=1 00:32:09.181 verify_backlog=512 00:32:09.181 verify_state_save=0 00:32:09.181 do_verify=1 00:32:09.181 verify=crc32c-intel 00:32:09.181 [job0] 00:32:09.181 filename=/dev/nvme0n1 00:32:09.181 [job1] 00:32:09.181 filename=/dev/nvme0n2 00:32:09.181 [job2] 00:32:09.181 filename=/dev/nvme0n3 00:32:09.181 [job3] 00:32:09.181 filename=/dev/nvme0n4 00:32:09.181 Could not set queue depth (nvme0n1) 00:32:09.181 Could not set queue depth (nvme0n2) 00:32:09.181 Could not set queue depth (nvme0n3) 00:32:09.181 Could not set queue depth (nvme0n4) 00:32:09.438 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:09.438 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:09.438 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:09.439 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:09.439 fio-3.35 00:32:09.439 Starting 4 threads 00:32:10.809 00:32:10.809 job0: (groupid=0, jobs=1): err= 0: pid=1225162: Sun Dec 8 06:36:00 2024 00:32:10.809 read: IOPS=521, BW=2088KiB/s (2138kB/s)(2092KiB/1002msec) 00:32:10.809 slat (nsec): min=6611, max=25597, avg=8189.43, stdev=2474.19 00:32:10.809 clat (usec): min=193, max=41044, avg=1470.68, stdev=7017.64 00:32:10.809 lat (usec): min=208, max=41059, avg=1478.87, stdev=7018.67 00:32:10.809 clat percentiles (usec): 00:32:10.809 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 210], 00:32:10.809 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 229], 00:32:10.809 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 265], 00:32:10.809 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:10.809 | 99.99th=[41157] 00:32:10.809 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:32:10.809 slat (nsec): min=6782, max=38580, avg=10768.30, stdev=3485.26 00:32:10.809 clat (usec): min=148, max=2079, avg=207.91, stdev=94.45 
00:32:10.809 lat (usec): min=156, max=2092, avg=218.68, stdev=95.13 00:32:10.809 clat percentiles (usec): 00:32:10.809 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 169], 00:32:10.809 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 194], 60.00th=[ 206], 00:32:10.809 | 70.00th=[ 217], 80.00th=[ 235], 90.00th=[ 260], 95.00th=[ 293], 00:32:10.809 | 99.00th=[ 367], 99.50th=[ 400], 99.90th=[ 1942], 99.95th=[ 2073], 00:32:10.809 | 99.99th=[ 2073] 00:32:10.809 bw ( KiB/s): min= 2032, max= 6160, per=29.54%, avg=4096.00, stdev=2918.94, samples=2 00:32:10.809 iops : min= 508, max= 1540, avg=1024.00, stdev=729.73, samples=2 00:32:10.809 lat (usec) : 250=88.75%, 500=9.89%, 750=0.13%, 1000=0.06% 00:32:10.809 lat (msec) : 2=0.06%, 4=0.06%, 50=1.03% 00:32:10.809 cpu : usr=0.50%, sys=1.70%, ctx=1550, majf=0, minf=1 00:32:10.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.809 issued rwts: total=523,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.809 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:10.809 job1: (groupid=0, jobs=1): err= 0: pid=1225163: Sun Dec 8 06:36:00 2024 00:32:10.809 read: IOPS=1000, BW=4004KiB/s (4100kB/s)(4140KiB/1034msec) 00:32:10.809 slat (nsec): min=6317, max=16147, avg=7465.56, stdev=1267.91 00:32:10.809 clat (usec): min=194, max=41029, avg=680.40, stdev=4175.29 00:32:10.809 lat (usec): min=200, max=41043, avg=687.86, stdev=4175.95 00:32:10.809 clat percentiles (usec): 00:32:10.809 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:32:10.809 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:32:10.809 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:32:10.809 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:10.809 | 99.99th=[41157] 00:32:10.809 write: IOPS=1485, BW=5942KiB/s (6085kB/s)(6144KiB/1034msec); 0 zone resets 00:32:10.809 slat (nsec): min=7955, max=56130, avg=10414.36, stdev=3362.34 00:32:10.809 clat (usec): min=137, max=3568, avg=194.76, stdev=142.54 00:32:10.809 lat (usec): min=145, max=3599, avg=205.17, stdev=143.07 00:32:10.809 clat percentiles (usec): 00:32:10.809 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:32:10.809 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 184], 00:32:10.809 | 70.00th=[ 204], 80.00th=[ 225], 90.00th=[ 247], 95.00th=[ 285], 00:32:10.809 | 99.00th=[ 338], 99.50th=[ 404], 99.90th=[ 3228], 99.95th=[ 3556], 00:32:10.809 | 99.99th=[ 3556] 00:32:10.809 bw ( KiB/s): min= 3328, max= 8960, per=44.31%, avg=6144.00, stdev=3982.43, samples=2 00:32:10.809 iops : min= 832, max= 2240, avg=1536.00, stdev=995.61, samples=2 00:32:10.809 lat (usec) : 250=76.00%, 500=23.14%, 750=0.19%, 1000=0.08% 00:32:10.809 lat (msec) : 4=0.16%, 50=0.43% 00:32:10.809 cpu : usr=1.36%, sys=3.10%, ctx=2572, majf=0, minf=1 00:32:10.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.810 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:10.810 job2: (groupid=0, jobs=1): err= 0: pid=1225164: Sun Dec 8 06:36:00 2024 00:32:10.810 read: IOPS=23, 
BW=95.5KiB/s (97.8kB/s)(96.0KiB/1005msec) 00:32:10.810 slat (nsec): min=8072, max=21361, avg=13566.00, stdev=3174.55 00:32:10.810 clat (usec): min=334, max=43354, avg=36235.34, stdev=13312.38 00:32:10.810 lat (usec): min=349, max=43369, avg=36248.90, stdev=13311.04 00:32:10.810 clat percentiles (usec): 00:32:10.810 | 1.00th=[ 334], 5.00th=[ 474], 10.00th=[ 4686], 20.00th=[40633], 00:32:10.810 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:10.810 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:32:10.810 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:32:10.810 | 99.99th=[43254] 00:32:10.810 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:32:10.810 slat (nsec): min=8948, max=50974, avg=11480.58, stdev=3239.69 00:32:10.810 clat (usec): min=166, max=986, avg=247.88, stdev=82.10 00:32:10.810 lat (usec): min=178, max=998, avg=259.36, stdev=82.33 00:32:10.810 clat percentiles (usec): 00:32:10.810 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 204], 00:32:10.810 | 30.00th=[ 212], 40.00th=[ 223], 50.00th=[ 233], 60.00th=[ 245], 00:32:10.810 | 70.00th=[ 258], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 330], 00:32:10.810 | 99.00th=[ 635], 99.50th=[ 930], 99.90th=[ 988], 99.95th=[ 988], 00:32:10.810 | 99.99th=[ 988] 00:32:10.810 bw ( KiB/s): min= 4096, max= 4096, per=29.54%, avg=4096.00, stdev= 0.00, samples=1 00:32:10.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:10.810 lat (usec) : 250=61.01%, 500=33.58%, 750=0.56%, 1000=0.75% 00:32:10.810 lat (msec) : 10=0.19%, 50=3.92% 00:32:10.810 cpu : usr=0.30%, sys=0.80%, ctx=537, majf=0, minf=1 00:32:10.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.810 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:10.810 job3: (groupid=0, jobs=1): err= 0: pid=1225165: Sun Dec 8 06:36:00 2024 00:32:10.810 read: IOPS=31, BW=124KiB/s (127kB/s)(128KiB/1032msec) 00:32:10.810 slat (nsec): min=7413, max=36509, avg=14087.69, stdev=6013.34 00:32:10.810 clat (usec): min=249, max=41057, avg=28623.45, stdev=18666.60 00:32:10.810 lat (usec): min=261, max=41071, avg=28637.53, stdev=18668.15 00:32:10.810 clat percentiles (usec): 00:32:10.810 | 1.00th=[ 249], 5.00th=[ 281], 10.00th=[ 314], 20.00th=[ 424], 00:32:10.810 | 30.00th=[11863], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:10.810 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:10.810 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:10.810 | 99.99th=[41157] 00:32:10.810 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:32:10.810 slat (nsec): min=8003, max=44668, avg=10852.99, stdev=3949.61 00:32:10.810 clat (usec): min=144, max=1018, avg=211.17, stdev=43.70 00:32:10.810 lat (usec): min=169, max=1052, avg=222.02, stdev=44.43 00:32:10.810 clat percentiles (usec): 00:32:10.810 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 188], 00:32:10.810 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:32:10.810 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 249], 00:32:10.810 | 99.00th=[ 277], 99.50th=[ 310], 99.90th=[ 1020], 99.95th=[ 1020], 00:32:10.810 | 99.99th=[ 
1020] 00:32:10.810 bw ( KiB/s): min= 4096, max= 4096, per=29.54%, avg=4096.00, stdev= 0.00, samples=1 00:32:10.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:10.810 lat (usec) : 250=89.89%, 500=5.51%, 750=0.18% 00:32:10.810 lat (msec) : 2=0.18%, 20=0.18%, 50=4.04% 00:32:10.810 cpu : usr=0.58%, sys=0.48%, ctx=545, majf=0, minf=1 00:32:10.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:10.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.810 issued rwts: total=32,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:10.810 00:32:10.810 Run status group 0 (all jobs): 00:32:10.810 READ: bw=6244KiB/s (6394kB/s), 95.5KiB/s-4004KiB/s (97.8kB/s-4100kB/s), io=6456KiB (6611kB), run=1002-1034msec 00:32:10.810 WRITE: bw=13.5MiB/s (14.2MB/s), 1984KiB/s-5942KiB/s (2032kB/s-6085kB/s), io=14.0MiB (14.7MB), run=1002-1034msec 00:32:10.810 00:32:10.810 Disk stats (read/write): 00:32:10.810 nvme0n1: ios=568/1024, merge=0/0, ticks=680/212, in_queue=892, util=86.27% 00:32:10.810 nvme0n2: ios=1079/1536, merge=0/0, ticks=686/291, in_queue=977, util=90.26% 00:32:10.810 nvme0n3: ios=76/512, merge=0/0, ticks=1630/125, in_queue=1755, util=93.67% 00:32:10.810 nvme0n4: ios=54/512, merge=0/0, ticks=1618/104, in_queue=1722, util=94.46% 00:32:10.810 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:10.810 [global] 00:32:10.810 thread=1 00:32:10.810 invalidate=1 00:32:10.810 rw=write 00:32:10.810 time_based=1 00:32:10.810 runtime=1 00:32:10.810 ioengine=libaio 00:32:10.810 direct=1 00:32:10.810 bs=4096 00:32:10.810 iodepth=128 00:32:10.810 norandommap=0 00:32:10.810 numjobs=1 00:32:10.810 00:32:10.810 verify_dump=1 00:32:10.810 verify_backlog=512 00:32:10.810 verify_state_save=0 00:32:10.810 do_verify=1 00:32:10.810 verify=crc32c-intel 00:32:10.810 [job0] 00:32:10.810 filename=/dev/nvme0n1 00:32:10.810 [job1] 00:32:10.810 filename=/dev/nvme0n2 00:32:10.810 [job2] 00:32:10.810 filename=/dev/nvme0n3 00:32:10.810 [job3] 00:32:10.810 filename=/dev/nvme0n4 00:32:10.810 Could not set queue depth (nvme0n1) 00:32:10.810 Could not set queue depth (nvme0n2) 00:32:10.810 Could not set queue depth (nvme0n3) 00:32:10.810 Could not set queue depth (nvme0n4) 00:32:10.810 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.810 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.810 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.810 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.810 fio-3.35 00:32:10.810 Starting 4 threads 00:32:12.179 00:32:12.179 job0: (groupid=0, jobs=1): err= 0: pid=1225571: Sun Dec 8 06:36:02 2024 00:32:12.179 read: IOPS=4359, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1004msec) 00:32:12.179 slat (usec): min=2, max=14097, avg=82.41, stdev=720.54 00:32:12.179 clat (usec): min=1915, max=42444, avg=12957.05, stdev=5222.94 00:32:12.179 lat (usec): min=1919, max=42450, avg=13039.45, stdev=5283.56 00:32:12.179 clat percentiles (usec): 00:32:12.179 | 1.00th=[ 3064], 5.00th=[ 4555], 
10.00th=[ 6587], 20.00th=[10159], 00:32:12.179 | 30.00th=[11207], 40.00th=[11994], 50.00th=[12518], 60.00th=[13042], 00:32:12.179 | 70.00th=[14353], 80.00th=[15926], 90.00th=[18744], 95.00th=[21365], 00:32:12.179 | 99.00th=[32375], 99.50th=[36963], 99.90th=[42206], 99.95th=[42206], 00:32:12.179 | 99.99th=[42206] 00:32:12.179 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:32:12.179 slat (usec): min=3, max=12321, avg=102.51, stdev=665.88 00:32:12.179 clat (usec): min=373, max=44271, avg=15371.58, stdev=9075.66 00:32:12.179 lat (usec): min=380, max=44277, avg=15474.09, stdev=9143.18 00:32:12.179 clat percentiles (usec): 00:32:12.179 | 1.00th=[ 1876], 5.00th=[ 4752], 10.00th=[ 6849], 20.00th=[ 9241], 00:32:12.179 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12125], 60.00th=[13173], 00:32:12.179 | 70.00th=[16057], 80.00th=[23200], 90.00th=[28705], 95.00th=[36439], 00:32:12.179 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:32:12.179 | 99.99th=[44303] 00:32:12.179 bw ( KiB/s): min=16384, max=20480, per=26.92%, avg=18432.00, stdev=2896.31, samples=2 00:32:12.179 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:32:12.179 lat (usec) : 500=0.02%, 1000=0.06% 00:32:12.179 lat (msec) : 2=0.86%, 4=2.74%, 10=18.27%, 20=62.08%, 50=15.97% 00:32:12.179 cpu : usr=3.39%, sys=4.39%, ctx=395, majf=0, minf=2 00:32:12.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:12.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:12.179 issued rwts: total=4377,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:12.179 job1: (groupid=0, jobs=1): err= 0: pid=1225572: Sun Dec 8 06:36:02 2024 00:32:12.179 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:32:12.179 slat (usec): min=2, max=30757, avg=112.93, stdev=954.53 00:32:12.179 clat (usec): min=2515, max=75204, avg=12954.08, stdev=7496.01 00:32:12.179 lat (usec): min=2522, max=82699, avg=13067.01, stdev=7595.27 00:32:12.179 clat percentiles (usec): 00:32:12.179 | 1.00th=[ 4490], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10290], 00:32:12.179 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11863], 00:32:12.179 | 70.00th=[12649], 80.00th=[13435], 90.00th=[15795], 95.00th=[18482], 00:32:12.179 | 99.00th=[58983], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:32:12.179 | 99.99th=[74974] 00:32:12.179 write: IOPS=4287, BW=16.7MiB/s (17.6MB/s)(16.9MiB/1009msec); 0 zone resets 00:32:12.179 slat (usec): min=4, max=23611, avg=117.22, stdev=955.48 00:32:12.179 clat (usec): min=1928, max=82840, avg=16374.61, stdev=15233.70 00:32:12.179 lat (usec): min=1934, max=82859, avg=16491.83, stdev=15314.14 00:32:12.179 clat percentiles (usec): 00:32:12.179 | 1.00th=[ 3163], 5.00th=[ 7242], 10.00th=[ 9110], 20.00th=[10159], 00:32:12.179 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11600], 00:32:12.179 | 70.00th=[12125], 80.00th=[13042], 90.00th=[41157], 95.00th=[60031], 00:32:12.179 | 99.00th=[78119], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:32:12.179 | 99.99th=[83362] 00:32:12.179 bw ( KiB/s): min=12288, max=21304, per=24.53%, avg=16796.00, stdev=6375.27, samples=2 00:32:12.180 iops : min= 3072, max= 5326, avg=4199.00, stdev=1593.82, samples=2 00:32:12.180 lat (msec) : 2=0.07%, 4=0.90%, 10=11.28%, 20=79.81%, 50=3.46% 00:32:12.180 lat (msec) : 
100=4.48% 00:32:12.180 cpu : usr=3.47%, sys=6.05%, ctx=418, majf=0, minf=1 00:32:12.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:12.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:12.180 issued rwts: total=4096,4326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:12.180 job2: (groupid=0, jobs=1): err= 0: pid=1225575: Sun Dec 8 06:36:02 2024 00:32:12.180 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:32:12.180 slat (usec): min=2, max=22096, avg=163.30, stdev=1047.08 00:32:12.180 clat (msec): min=7, max=102, avg=22.21, stdev=15.33 00:32:12.180 lat (msec): min=7, max=104, avg=22.37, stdev=15.39 00:32:12.180 clat percentiles (msec): 00:32:12.180 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 14], 00:32:12.180 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 20], 00:32:12.180 | 70.00th=[ 21], 80.00th=[ 27], 90.00th=[ 46], 95.00th=[ 51], 00:32:12.180 | 99.00th=[ 89], 99.50th=[ 97], 99.90th=[ 101], 99.95th=[ 104], 00:32:12.180 | 99.99th=[ 104] 00:32:12.180 write: IOPS=3211, BW=12.5MiB/s (13.2MB/s)(12.6MiB/1004msec); 0 zone resets 00:32:12.180 slat (usec): min=3, max=14135, avg=149.51, stdev=786.79 00:32:12.180 clat (usec): min=738, max=96375, avg=18209.17, stdev=10431.22 00:32:12.180 lat (usec): min=7978, max=96384, avg=18358.68, stdev=10522.75 00:32:12.180 clat percentiles (usec): 00:32:12.180 | 1.00th=[ 9896], 5.00th=[11469], 10.00th=[12518], 20.00th=[12911], 00:32:12.180 | 30.00th=[13304], 40.00th=[13698], 50.00th=[16188], 60.00th=[16581], 00:32:12.180 | 70.00th=[18220], 80.00th=[22676], 90.00th=[23987], 95.00th=[27919], 00:32:12.180 | 99.00th=[83362], 99.50th=[93848], 99.90th=[94897], 99.95th=[95945], 00:32:12.180 | 99.99th=[95945] 00:32:12.180 bw ( KiB/s): min= 8384, max=16384, per=18.09%, avg=12384.00, stdev=5656.85, samples=2 00:32:12.180 iops : min= 2096, max= 4096, avg=3096.00, stdev=1414.21, samples=2 00:32:12.180 lat (usec) : 750=0.02% 00:32:12.180 lat (msec) : 10=2.13%, 20=67.03%, 50=27.13%, 100=3.53%, 250=0.17% 00:32:12.180 cpu : usr=1.89%, sys=4.19%, ctx=317, majf=0, minf=2 00:32:12.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:12.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:12.180 issued rwts: total=3072,3224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:12.180 job3: (groupid=0, jobs=1): err= 0: pid=1225576: Sun Dec 8 06:36:02 2024 00:32:12.180 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:32:12.180 slat (usec): min=2, max=6293, avg=99.87, stdev=615.09 00:32:12.180 clat (usec): min=5539, max=19315, avg=12845.80, stdev=2083.28 00:32:12.180 lat (usec): min=5543, max=19347, avg=12945.68, stdev=2111.80 00:32:12.180 clat percentiles (usec): 00:32:12.180 | 1.00th=[ 7308], 5.00th=[10028], 10.00th=[10683], 20.00th=[11207], 00:32:12.180 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12649], 60.00th=[13042], 00:32:12.180 | 70.00th=[13698], 80.00th=[14746], 90.00th=[15533], 95.00th=[16450], 00:32:12.180 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:32:12.180 | 99.99th=[19268] 00:32:12.180 write: IOPS=5098, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:32:12.180 slat 
(usec): min=3, max=12854, avg=99.83, stdev=649.79 00:32:12.180 clat (usec): min=669, max=45730, avg=13246.18, stdev=4370.65 00:32:12.180 lat (usec): min=5284, max=45735, avg=13346.01, stdev=4402.73 00:32:12.180 clat percentiles (usec): 00:32:12.180 | 1.00th=[ 6980], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[11731], 00:32:12.180 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:32:12.180 | 70.00th=[13304], 80.00th=[14091], 90.00th=[15664], 95.00th=[16188], 00:32:12.180 | 99.00th=[39060], 99.50th=[40109], 99.90th=[41681], 99.95th=[45876], 00:32:12.180 | 99.99th=[45876] 00:32:12.180 bw ( KiB/s): min=19408, max=20480, per=29.13%, avg=19944.00, stdev=758.02, samples=2 00:32:12.180 iops : min= 4852, max= 5120, avg=4986.00, stdev=189.50, samples=2 00:32:12.180 lat (usec) : 750=0.01% 00:32:12.180 lat (msec) : 10=6.74%, 20=91.43%, 50=1.82% 00:32:12.180 cpu : usr=4.29%, sys=5.49%, ctx=363, majf=0, minf=2 00:32:12.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:12.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:12.180 issued rwts: total=4608,5114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:12.180 00:32:12.180 Run status group 0 (all jobs): 00:32:12.180 READ: bw=62.5MiB/s (65.6MB/s), 12.0MiB/s-17.9MiB/s (12.5MB/s-18.8MB/s), io=63.1MiB (66.2MB), run=1003-1009msec 00:32:12.180 WRITE: bw=66.9MiB/s (70.1MB/s), 12.5MiB/s-19.9MiB/s (13.2MB/s-20.9MB/s), io=67.5MiB (70.7MB), run=1003-1009msec 00:32:12.180 00:32:12.180 Disk stats (read/write): 00:32:12.180 nvme0n1: ios=3634/3727, merge=0/0, ticks=48054/55752, in_queue=103806, util=86.77% 00:32:12.180 nvme0n2: ios=3241/3584, merge=0/0, ticks=23739/26725, in_queue=50464, util=96.95% 00:32:12.180 nvme0n3: ios=2617/2791, merge=0/0, ticks=14373/14864, in_queue=29237, util=90.71% 00:32:12.180 nvme0n4: ios=4156/4428, merge=0/0, ticks=22836/21866, in_queue=44702, util=97.79% 00:32:12.180 06:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:12.180 [global] 00:32:12.180 thread=1 00:32:12.180 invalidate=1 00:32:12.180 rw=randwrite 00:32:12.180 time_based=1 00:32:12.180 runtime=1 00:32:12.180 ioengine=libaio 00:32:12.180 direct=1 00:32:12.180 bs=4096 00:32:12.180 iodepth=128 00:32:12.180 norandommap=0 00:32:12.180 numjobs=1 00:32:12.180 00:32:12.180 verify_dump=1 00:32:12.180 verify_backlog=512 00:32:12.180 verify_state_save=0 00:32:12.180 do_verify=1 00:32:12.180 verify=crc32c-intel 00:32:12.180 [job0] 00:32:12.180 filename=/dev/nvme0n1 00:32:12.180 [job1] 00:32:12.180 filename=/dev/nvme0n2 00:32:12.180 [job2] 00:32:12.180 filename=/dev/nvme0n3 00:32:12.180 [job3] 00:32:12.180 filename=/dev/nvme0n4 00:32:12.180 Could not set queue depth (nvme0n1) 00:32:12.180 Could not set queue depth (nvme0n2) 00:32:12.180 Could not set queue depth (nvme0n3) 00:32:12.180 Could not set queue depth (nvme0n4) 00:32:12.180 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:12.180 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:12.180 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:12.180 job3: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:12.180 fio-3.35 00:32:12.180 Starting 4 threads 00:32:13.553 00:32:13.553 job0: (groupid=0, jobs=1): err= 0: pid=1225859: Sun Dec 8 06:36:03 2024 00:32:13.553 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:32:13.553 slat (usec): min=2, max=27084, avg=136.50, stdev=1102.61 00:32:13.553 clat (msec): min=3, max=124, avg=16.04, stdev=14.54 00:32:13.553 lat (msec): min=3, max=124, avg=16.18, stdev=14.70 00:32:13.553 clat percentiles (msec): 00:32:13.553 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:32:13.553 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:32:13.553 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 31], 95.00th=[ 34], 00:32:13.553 | 99.00th=[ 102], 99.50th=[ 114], 99.90th=[ 125], 99.95th=[ 125], 00:32:13.553 | 99.99th=[ 125] 00:32:13.553 write: IOPS=3200, BW=12.5MiB/s (13.1MB/s)(12.7MiB/1014msec); 0 zone resets 00:32:13.553 slat (usec): min=3, max=15999, avg=166.78, stdev=1050.66 00:32:13.553 clat (usec): min=190, max=126915, avg=24451.82, stdev=31931.92 00:32:13.553 lat (usec): min=836, max=126927, avg=24618.61, stdev=32148.44 00:32:13.553 clat percentiles (msec): 00:32:13.553 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:32:13.553 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:32:13.553 | 70.00th=[ 13], 80.00th=[ 21], 90.00th=[ 96], 95.00th=[ 106], 00:32:13.553 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 128], 99.95th=[ 128], 00:32:13.553 | 99.99th=[ 128] 00:32:13.553 bw ( KiB/s): min= 3768, max=21168, per=20.06%, avg=12468.00, stdev=12303.66, samples=2 00:32:13.553 iops : min= 942, max= 5292, avg=3117.00, stdev=3075.91, samples=2 00:32:13.553 lat (usec) : 250=0.02%, 1000=0.24% 00:32:13.553 lat (msec) : 2=0.13%, 4=1.52%, 10=22.08%, 20=58.29%, 50=9.07% 00:32:13.553 lat (msec) : 100=4.01%, 250=4.65% 00:32:13.553 cpu : usr=2.57%, sys=4.05%, ctx=250, majf=0, minf=2 00:32:13.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:13.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:13.553 issued rwts: total=3072,3245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.553 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:13.553 job1: (groupid=0, jobs=1): err= 0: pid=1225860: Sun Dec 8 06:36:03 2024 00:32:13.553 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:32:13.553 slat (usec): min=3, max=21061, avg=114.61, stdev=882.75 00:32:13.553 clat (usec): min=4121, max=83582, avg=15348.86, stdev=13038.90 00:32:13.553 lat (usec): min=4125, max=85056, avg=15463.47, stdev=13132.83 00:32:13.553 clat percentiles (usec): 00:32:13.553 | 1.00th=[ 5669], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9503], 00:32:13.553 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10552], 60.00th=[11600], 00:32:13.553 | 70.00th=[12518], 80.00th=[14091], 90.00th=[32113], 95.00th=[51643], 00:32:13.553 | 99.00th=[66323], 99.50th=[73925], 99.90th=[81265], 99.95th=[81265], 00:32:13.553 | 99.99th=[83362] 00:32:13.553 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1003msec); 0 zone resets 00:32:13.553 slat (usec): min=4, max=21357, avg=90.19, stdev=634.17 00:32:13.553 clat (usec): min=560, max=76961, avg=12224.09, stdev=8418.83 00:32:13.553 lat (usec): min=1241, max=76972, avg=12314.28, stdev=8484.03 00:32:13.553 clat percentiles (usec): 00:32:13.553 | 1.00th=[ 5866], 5.00th=[ 
6587], 10.00th=[ 8225], 20.00th=[ 9372], 00:32:13.553 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:32:13.553 | 70.00th=[11338], 80.00th=[11863], 90.00th=[14091], 95.00th=[19530], 00:32:13.553 | 99.00th=[54264], 99.50th=[56886], 99.90th=[76022], 99.95th=[76022], 00:32:13.553 | 99.99th=[77071] 00:32:13.553 bw ( KiB/s): min=16384, max=20480, per=29.65%, avg=18432.00, stdev=2896.31, samples=2 00:32:13.553 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:32:13.553 lat (usec) : 750=0.01% 00:32:13.553 lat (msec) : 2=0.05%, 10=34.31%, 20=57.41%, 50=4.64%, 100=3.58% 00:32:13.553 cpu : usr=5.09%, sys=7.78%, ctx=329, majf=0, minf=1 00:32:13.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:13.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:13.553 issued rwts: total=4608,4617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.553 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:13.553 job2: (groupid=0, jobs=1): err= 0: pid=1225861: Sun Dec 8 06:36:03 2024 00:32:13.553 read: IOPS=3850, BW=15.0MiB/s (15.8MB/s)(15.8MiB/1049msec) 00:32:13.553 slat (usec): min=2, max=17127, avg=107.94, stdev=869.10 00:32:13.553 clat (usec): min=5602, max=64772, avg=15981.45, stdev=8386.83 00:32:13.553 lat (usec): min=5610, max=64785, avg=16089.39, stdev=8431.91 00:32:13.553 clat percentiles (usec): 00:32:13.553 | 1.00th=[ 6194], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[10945], 00:32:13.553 | 30.00th=[11469], 40.00th=[12387], 50.00th=[13173], 60.00th=[14353], 00:32:13.553 | 70.00th=[16712], 80.00th=[20055], 90.00th=[23987], 95.00th=[27395], 00:32:13.553 | 99.00th=[53740], 99.50th=[53740], 99.90th=[64750], 99.95th=[64750], 00:32:13.553 | 99.99th=[64750] 00:32:13.553 write: IOPS=3904, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1049msec); 0 zone resets 00:32:13.553 slat (usec): min=3, max=17132, avg=125.25, stdev=902.48 00:32:13.553 clat (usec): min=837, max=59542, avg=16564.81, stdev=9458.50 00:32:13.553 lat (usec): min=873, max=59551, avg=16690.06, stdev=9527.42 00:32:13.553 clat percentiles (usec): 00:32:13.553 | 1.00th=[ 6259], 5.00th=[ 6980], 10.00th=[ 7701], 20.00th=[10159], 00:32:13.553 | 30.00th=[10814], 40.00th=[12780], 50.00th=[14091], 60.00th=[15926], 00:32:13.553 | 70.00th=[17695], 80.00th=[22938], 90.00th=[28181], 95.00th=[31327], 00:32:13.553 | 99.00th=[56361], 99.50th=[58983], 99.90th=[59507], 99.95th=[59507], 00:32:13.553 | 99.99th=[59507] 00:32:13.553 bw ( KiB/s): min=13616, max=19152, per=26.36%, avg=16384.00, stdev=3914.54, samples=2 00:32:13.553 iops : min= 3404, max= 4788, avg=4096.00, stdev=978.64, samples=2 00:32:13.553 lat (usec) : 1000=0.01% 00:32:13.553 lat (msec) : 4=0.26%, 10=12.44%, 20=65.24%, 50=19.43%, 100=2.62% 00:32:13.553 cpu : usr=4.77%, sys=7.92%, ctx=242, majf=0, minf=2 00:32:13.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:13.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:13.554 issued rwts: total=4039,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:13.554 job3: (groupid=0, jobs=1): err= 0: pid=1225862: Sun Dec 8 06:36:03 2024 00:32:13.554 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:32:13.554 slat (usec): min=2, max=14065, avg=111.46, stdev=800.52 
00:32:13.554 clat (usec): min=5815, max=39150, avg=14848.08, stdev=4684.17 00:32:13.554 lat (usec): min=5826, max=39158, avg=14959.54, stdev=4734.71 00:32:13.554 clat percentiles (usec): 00:32:13.554 | 1.00th=[ 7963], 5.00th=[ 9503], 10.00th=[10945], 20.00th=[11469], 00:32:13.554 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13173], 60.00th=[13960], 00:32:13.554 | 70.00th=[15664], 80.00th=[17433], 90.00th=[22152], 95.00th=[25035], 00:32:13.554 | 99.00th=[28705], 99.50th=[28967], 99.90th=[39060], 99.95th=[39060], 00:32:13.554 | 99.99th=[39060] 00:32:13.554 write: IOPS=4322, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1005msec); 0 zone resets 00:32:13.554 slat (usec): min=3, max=12366, avg=111.43, stdev=692.20 00:32:13.554 clat (usec): min=3967, max=53492, avg=15170.29, stdev=6094.37 00:32:13.554 lat (usec): min=4427, max=53512, avg=15281.72, stdev=6149.75 00:32:13.554 clat percentiles (usec): 00:32:13.554 | 1.00th=[ 6587], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[11076], 00:32:13.554 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:32:13.554 | 70.00th=[15926], 80.00th=[19530], 90.00th=[25035], 95.00th=[26346], 00:32:13.554 | 99.00th=[31065], 99.50th=[42206], 99.90th=[45876], 99.95th=[53216], 00:32:13.554 | 99.99th=[53740] 00:32:13.554 bw ( KiB/s): min=14720, max=19016, per=27.14%, avg=16868.00, stdev=3037.73, samples=2 00:32:13.554 iops : min= 3680, max= 4754, avg=4217.00, stdev=759.43, samples=2 00:32:13.554 lat (msec) : 4=0.01%, 10=8.29%, 20=75.47%, 50=16.17%, 100=0.05% 00:32:13.554 cpu : usr=4.48%, sys=9.56%, ctx=390, majf=0, minf=1 00:32:13.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:13.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:13.554 issued rwts: total=4096,4344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:13.554 00:32:13.554 Run status group 0 (all jobs): 00:32:13.554 READ: bw=58.9MiB/s (61.8MB/s), 11.8MiB/s-17.9MiB/s (12.4MB/s-18.8MB/s), io=61.8MiB (64.8MB), run=1003-1049msec 00:32:13.554 WRITE: bw=60.7MiB/s (63.7MB/s), 12.5MiB/s-18.0MiB/s (13.1MB/s-18.9MB/s), io=63.7MiB (66.8MB), run=1003-1049msec 00:32:13.554 00:32:13.554 Disk stats (read/write): 00:32:13.554 nvme0n1: ios=2585/3063, merge=0/0, ticks=29175/69105, in_queue=98280, util=99.40% 00:32:13.554 nvme0n2: ios=3631/3812, merge=0/0, ticks=36726/34566, in_queue=71292, util=99.70% 00:32:13.554 nvme0n3: ios=3455/3584, merge=0/0, ticks=38726/39704, in_queue=78430, util=99.58% 00:32:13.554 nvme0n4: ios=3391/3584, merge=0/0, ticks=36160/35960, in_queue=72120, util=99.37% 00:32:13.554 06:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:13.554 06:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1225996 00:32:13.554 06:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:13.554 06:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:13.554 [global] 00:32:13.554 thread=1 00:32:13.554 invalidate=1 00:32:13.554 rw=read 00:32:13.554 time_based=1 00:32:13.554 runtime=10 00:32:13.554 ioengine=libaio 00:32:13.554 direct=1 00:32:13.554 bs=4096 00:32:13.554 iodepth=1 00:32:13.554 norandommap=1 00:32:13.554 numjobs=1 
00:32:13.554 00:32:13.554 [job0] 00:32:13.554 filename=/dev/nvme0n1 00:32:13.554 [job1] 00:32:13.554 filename=/dev/nvme0n2 00:32:13.554 [job2] 00:32:13.554 filename=/dev/nvme0n3 00:32:13.554 [job3] 00:32:13.554 filename=/dev/nvme0n4 00:32:13.554 Could not set queue depth (nvme0n1) 00:32:13.554 Could not set queue depth (nvme0n2) 00:32:13.554 Could not set queue depth (nvme0n3) 00:32:13.554 Could not set queue depth (nvme0n4) 00:32:13.811 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:13.811 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:13.811 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:13.811 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:13.811 fio-3.35 00:32:13.811 Starting 4 threads 00:32:17.097 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:17.097 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:17.097 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4169728, buflen=4096 00:32:17.097 fio: pid=1226097, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:17.097 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:17.097 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:17.097 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=3731456, buflen=4096 00:32:17.097 fio: pid=1226096, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:17.664 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:17.664 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:17.664 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6766592, buflen=4096 00:32:17.664 fio: pid=1226094, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:17.923 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=35577856, buflen=4096 00:32:17.923 fio: pid=1226095, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:17.923 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:17.923 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:17.923 00:32:17.923 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1226094: Sun Dec 8 06:36:07 2024 00:32:17.923 read: IOPS=460, BW=1840KiB/s (1884kB/s)(6608KiB/3591msec) 00:32:17.923 slat 
(usec): min=5, max=10504, avg=22.41, stdev=350.13 00:32:17.923 clat (usec): min=192, max=41984, avg=2132.47, stdev=8525.67 00:32:17.923 lat (usec): min=199, max=48017, avg=2154.88, stdev=8551.75 00:32:17.923 clat percentiles (usec): 00:32:17.923 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:32:17.923 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 253], 00:32:17.923 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 343], 95.00th=[ 537], 00:32:17.923 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:32:17.923 | 99.99th=[42206] 00:32:17.923 bw ( KiB/s): min= 96, max= 5216, per=14.00%, avg=1776.00, stdev=2289.17, samples=6 00:32:17.923 iops : min= 24, max= 1304, avg=444.00, stdev=572.29, samples=6 00:32:17.923 lat (usec) : 250=57.89%, 500=36.30%, 750=1.03% 00:32:17.923 lat (msec) : 2=0.06%, 20=0.06%, 50=4.60% 00:32:17.923 cpu : usr=0.17%, sys=0.58%, ctx=1657, majf=0, minf=2 00:32:17.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.923 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.923 issued rwts: total=1653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:17.923 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1226095: Sun Dec 8 06:36:07 2024 00:32:17.923 read: IOPS=2245, BW=8980KiB/s (9196kB/s)(33.9MiB/3869msec) 00:32:17.923 slat (usec): min=6, max=28675, avg=19.06, stdev=391.24 00:32:17.923 clat (usec): min=186, max=41243, avg=420.64, stdev=2467.39 00:32:17.923 lat (usec): min=193, max=41253, avg=439.70, stdev=2498.65 00:32:17.923 clat percentiles (usec): 00:32:17.923 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 210], 00:32:17.923 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 251], 00:32:17.923 | 70.00th=[ 281], 80.00th=[ 302], 90.00th=[ 383], 95.00th=[ 482], 00:32:17.923 | 99.00th=[ 553], 99.50th=[ 627], 99.90th=[41157], 99.95th=[41157], 00:32:17.923 | 99.99th=[41157] 00:32:17.923 bw ( KiB/s): min= 6272, max=17512, per=70.66%, avg=8961.00, stdev=3871.88, samples=7 00:32:17.923 iops : min= 1568, max= 4378, avg=2240.14, stdev=968.01, samples=7 00:32:17.923 lat (usec) : 250=58.90%, 500=38.22%, 750=2.42%, 1000=0.01% 00:32:17.923 lat (msec) : 2=0.01%, 4=0.02%, 10=0.02%, 20=0.01%, 50=0.37% 00:32:17.923 cpu : usr=1.58%, sys=3.31%, ctx=8699, majf=0, minf=1 00:32:17.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.923 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.923 issued rwts: total=8687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:17.923 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1226096: Sun Dec 8 06:36:07 2024 00:32:17.923 read: IOPS=278, BW=1114KiB/s (1140kB/s)(3644KiB/3272msec) 00:32:17.923 slat (nsec): min=5960, max=34334, avg=8446.38, stdev=4206.80 00:32:17.923 clat (usec): min=201, max=41811, avg=3555.70, stdev=11054.27 00:32:17.923 lat (usec): min=207, max=41818, avg=3564.12, stdev=11056.78 00:32:17.923 clat percentiles (usec): 00:32:17.923 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 235], 20.00th=[ 273], 00:32:17.923 | 30.00th=[ 277], 40.00th=[ 285], 
50.00th=[ 289], 60.00th=[ 297], 00:32:17.923 | 70.00th=[ 310], 80.00th=[ 367], 90.00th=[ 429], 95.00th=[41157], 00:32:17.923 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:32:17.923 | 99.99th=[41681] 00:32:17.923 bw ( KiB/s): min= 96, max= 6120, per=8.74%, avg=1109.33, stdev=2454.74, samples=6 00:32:17.923 iops : min= 24, max= 1530, avg=277.33, stdev=613.68, samples=6 00:32:17.923 lat (usec) : 250=15.57%, 500=76.10%, 750=0.22% 00:32:17.923 lat (msec) : 50=8.00% 00:32:17.923 cpu : usr=0.18%, sys=0.31%, ctx=913, majf=0, minf=1 00:32:17.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.923 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.923 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:17.923 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1226097: Sun Dec 8 06:36:07 2024 00:32:17.923 read: IOPS=343, BW=1372KiB/s (1404kB/s)(4072KiB/2969msec) 00:32:17.923 slat (nsec): min=5839, max=44622, avg=9372.74, stdev=4788.43 00:32:17.923 clat (usec): min=207, max=42018, avg=2882.02, stdev=9864.24 00:32:17.923 lat (usec): min=214, max=42031, avg=2891.38, stdev=9866.53 00:32:17.923 clat percentiles (usec): 00:32:17.923 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 245], 00:32:17.923 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 314], 00:32:17.923 | 70.00th=[ 433], 80.00th=[ 465], 90.00th=[ 523], 95.00th=[41157], 00:32:17.923 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:32:17.923 | 99.99th=[42206] 00:32:17.923 bw ( KiB/s): min= 96, max= 2984, per=5.47%, avg=694.40, stdev=1280.12, samples=5 00:32:17.923 iops : min= 24, max= 746, avg=173.60, stdev=320.03, samples=5 00:32:17.923 lat (usec) : 250=25.32%, 500=61.53%, 750=6.67% 00:32:17.923 lat (msec) : 4=0.10%, 50=6.28% 00:32:17.923 cpu : usr=0.13%, sys=0.40%, ctx=1019, majf=0, minf=2 00:32:17.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.924 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.924 issued rwts: total=1019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:17.924 00:32:17.924 Run status group 0 (all jobs): 00:32:17.924 READ: bw=12.4MiB/s (13.0MB/s), 1114KiB/s-8980KiB/s (1140kB/s-9196kB/s), io=47.9MiB (50.2MB), run=2969-3869msec 00:32:17.924 00:32:17.924 Disk stats (read/write): 00:32:17.924 nvme0n1: ios=1687/0, merge=0/0, ticks=4351/0, in_queue=4351, util=98.66% 00:32:17.924 nvme0n2: ios=8724/0, merge=0/0, ticks=4443/0, in_queue=4443, util=98.09% 00:32:17.924 nvme0n3: ios=907/0, merge=0/0, ticks=3067/0, in_queue=3067, util=96.76% 00:32:17.924 nvme0n4: ios=1043/0, merge=0/0, ticks=2876/0, in_queue=2876, util=97.87% 00:32:18.182 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:18.182 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:18.440 06:36:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:18.440 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:18.698 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:18.698 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:18.956 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:18.956 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:19.215 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:19.215 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1225996 00:32:19.215 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:19.215 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:19.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:19.473 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:19.473 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:19.473 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:19.473 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:19.474 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:19.474 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:19.474 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:19.474 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:19.474 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:19.474 nvmf hotplug test: fio failed as expected 00:32:19.474 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:19.732 06:36:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:19.732 rmmod nvme_tcp 00:32:19.732 rmmod nvme_fabrics 00:32:19.732 rmmod nvme_keyring 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1223993 ']' 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1223993 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1223993 ']' 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1223993 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1223993 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1223993' 00:32:19.732 killing process with pid 1223993 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1223993 00:32:19.732 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1223993 00:32:19.991 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:19.991 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:19.991 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:19.991 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:19.991 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:19.991 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:19.991 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:19.991 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:19.991 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:19.991 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.991 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.991 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.532 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:22.532 00:32:22.532 real 0m23.919s 00:32:22.532 user 1m8.945s 00:32:22.532 sys 0m9.316s 00:32:22.532 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:22.532 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:22.532 ************************************ 00:32:22.532 END TEST nvmf_fio_target 00:32:22.532 ************************************ 00:32:22.532 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:22.532 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:22.532 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.532 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:22.532 ************************************ 00:32:22.532 START TEST nvmf_bdevio 00:32:22.532 ************************************ 00:32:22.532 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:22.532 * Looking for test storage... 
00:32:22.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:22.532 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:22.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.533 --rc genhtml_branch_coverage=1 00:32:22.533 --rc genhtml_function_coverage=1 00:32:22.533 --rc genhtml_legend=1 00:32:22.533 --rc geninfo_all_blocks=1 00:32:22.533 --rc geninfo_unexecuted_blocks=1 00:32:22.533 00:32:22.533 ' 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:22.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.533 --rc genhtml_branch_coverage=1 00:32:22.533 --rc genhtml_function_coverage=1 00:32:22.533 --rc genhtml_legend=1 00:32:22.533 --rc geninfo_all_blocks=1 00:32:22.533 --rc geninfo_unexecuted_blocks=1 00:32:22.533 00:32:22.533 ' 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:22.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.533 --rc genhtml_branch_coverage=1 00:32:22.533 --rc genhtml_function_coverage=1 00:32:22.533 --rc genhtml_legend=1 00:32:22.533 --rc geninfo_all_blocks=1 00:32:22.533 --rc geninfo_unexecuted_blocks=1 00:32:22.533 00:32:22.533 ' 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:22.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.533 --rc genhtml_branch_coverage=1 00:32:22.533 --rc genhtml_function_coverage=1 00:32:22.533 --rc genhtml_legend=1 00:32:22.533 --rc geninfo_all_blocks=1 00:32:22.533 --rc geninfo_unexecuted_blocks=1 00:32:22.533 00:32:22.533 ' 00:32:22.533 06:36:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.533 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.534 06:36:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:22.534 06:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:24.449 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:24.449 06:36:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.449 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:24.450 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:24.450 Found net devices under 0000:84:00.0: cvl_0_0 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:24.450 Found net devices under 0000:84:00.1: cvl_0_1 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:24.450 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:24.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:24.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:32:24.708 00:32:24.708 --- 10.0.0.2 ping statistics --- 00:32:24.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.708 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:24.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:24.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:32:24.708 00:32:24.708 --- 10.0.0.1 ping statistics --- 00:32:24.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.708 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:24.708 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:24.709 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.709 06:36:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.709 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1229352 00:32:24.709 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:24.709 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1229352 00:32:24.709 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1229352 ']' 00:32:24.709 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.709 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:24.709 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.709 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:24.709 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.709 [2024-12-08 06:36:14.723990] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:24.709 [2024-12-08 06:36:14.725096] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:32:24.709 [2024-12-08 06:36:14.725152] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:24.709 [2024-12-08 06:36:14.797554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:24.967 [2024-12-08 06:36:14.856765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:24.967 [2024-12-08 06:36:14.856842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:24.967 [2024-12-08 06:36:14.856857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:24.967 [2024-12-08 06:36:14.856869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:24.967 [2024-12-08 06:36:14.856888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:24.967 [2024-12-08 06:36:14.858546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:24.967 [2024-12-08 06:36:14.858610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:24.967 [2024-12-08 06:36:14.858661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:24.967 [2024-12-08 06:36:14.858664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:24.967 [2024-12-08 06:36:14.945542] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
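The nvmftestinit/nvmf_tcp_init sequence traced above reduces to the plain shell below. This is a condensed sketch, not nvmf/common.sh itself: the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and the nvmf_tgt flags are the ones from this run, and the ipts/nvmfappstart wrappers are inlined.

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # ipts tags the rule so teardown can strip only SPDK's entries later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                           # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> root ns
    # nvmfappstart -m 0x78: run the target inside the namespace, interrupt mode on
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!                                   # 1229352 in this run; waitforlisten then polls /var/tmp/spdk.sock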
00:32:24.967 [2024-12-08 06:36:14.945757] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:24.967 [2024-12-08 06:36:14.946047] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:24.967 [2024-12-08 06:36:14.946608] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:24.967 [2024-12-08 06:36:14.946861] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:24.967 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:24.967 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:24.967 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:24.967 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:24.967 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.967 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:24.967 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:24.967 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.967 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.967 [2024-12-08 06:36:14.991383] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.967 Malloc0 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.967 06:36:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:24.967 [2024-12-08 06:36:15.063613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:24.967 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:24.967 { 00:32:24.967 "params": { 00:32:24.967 "name": "Nvme$subsystem", 00:32:24.967 "trtype": "$TEST_TRANSPORT", 00:32:24.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:24.967 "adrfam": "ipv4", 00:32:24.967 "trsvcid": "$NVMF_PORT", 00:32:24.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:24.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:24.967 "hdgst": ${hdgst:-false}, 00:32:24.967 "ddgst": ${ddgst:-false} 00:32:24.967 }, 00:32:24.967 "method": "bdev_nvme_attach_controller" 00:32:24.967 } 00:32:24.968 EOF 00:32:24.968 )") 00:32:24.968 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:24.968 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:32:24.968 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:24.968 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:24.968 "params": { 00:32:24.968 "name": "Nvme1", 00:32:24.968 "trtype": "tcp", 00:32:24.968 "traddr": "10.0.0.2", 00:32:24.968 "adrfam": "ipv4", 00:32:24.968 "trsvcid": "4420", 00:32:24.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:24.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:24.968 "hdgst": false, 00:32:24.968 "ddgst": false 00:32:24.968 }, 00:32:24.968 "method": "bdev_nvme_attach_controller" 00:32:24.968 }' 00:32:25.226 [2024-12-08 06:36:15.114534] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
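Written out without the rpc_cmd wrapper, the provisioning traced above is five rpc.py calls followed by the bdevio launch. A sketch under two assumptions: rpc.py addresses the same default /var/tmp/spdk.sock the harness waited on, and the subsystems/bdev wrapper around the verbatim bdev_nvme_attach_controller object is reconstructed from what gen_nvmf_target_json printed above.

    RPC=scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio connects back as an initiator from the generated JSON; the harness
    # hands it /dev/fd/62 via process substitution, /dev/stdin does the same job:
    test/bdev/bdevio/bdevio --json /dev/stdin <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [{
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }]}]}
    EOF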
00:32:25.226 [2024-12-08 06:36:15.114622] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229388 ] 00:32:25.226 [2024-12-08 06:36:15.185131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:25.226 [2024-12-08 06:36:15.250421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.226 [2024-12-08 06:36:15.250476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:25.226 [2024-12-08 06:36:15.250480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.793 I/O targets: 00:32:25.793 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:25.793 00:32:25.793 00:32:25.793 CUnit - A unit testing framework for C - Version 2.1-3 00:32:25.793 http://cunit.sourceforge.net/ 00:32:25.793 00:32:25.793 00:32:25.793 Suite: bdevio tests on: Nvme1n1 00:32:25.793 Test: blockdev write read block ...passed 00:32:25.793 Test: blockdev write zeroes read block ...passed 00:32:25.793 Test: blockdev write zeroes read no split ...passed 00:32:25.793 Test: blockdev write zeroes read split ...passed 00:32:25.793 Test: blockdev write zeroes read split partial ...passed 00:32:25.793 Test: blockdev reset ...[2024-12-08 06:36:15.751909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:25.793 [2024-12-08 06:36:15.752044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8a70 (9): Bad file descriptor 00:32:25.793 [2024-12-08 06:36:15.797278] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:32:25.793 passed 00:32:25.793 Test: blockdev write read 8 blocks ...passed 00:32:25.793 Test: blockdev write read size > 128k ...passed 00:32:25.793 Test: blockdev write read invalid size ...passed 00:32:26.052 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:26.052 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:26.052 Test: blockdev write read max offset ...passed 00:32:26.052 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:26.052 Test: blockdev writev readv 8 blocks ...passed 00:32:26.052 Test: blockdev writev readv 30 x 1block ...passed 00:32:26.052 Test: blockdev writev readv block ...passed 00:32:26.052 Test: blockdev writev readv size > 128k ...passed 00:32:26.052 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:26.052 Test: blockdev comparev and writev ...[2024-12-08 06:36:16.093493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.052 [2024-12-08 06:36:16.093531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.052 [2024-12-08 06:36:16.093556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.052 [2024-12-08 06:36:16.093573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:26.052 [2024-12-08 06:36:16.094031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.052 [2024-12-08 06:36:16.094058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:26.052 [2024-12-08 06:36:16.094080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.052 [2024-12-08 06:36:16.094096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:26.052 [2024-12-08 06:36:16.094531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.052 [2024-12-08 06:36:16.094555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:26.052 [2024-12-08 06:36:16.094576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.052 [2024-12-08 06:36:16.094593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:26.052 [2024-12-08 06:36:16.095036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.052 [2024-12-08 06:36:16.095061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:26.052 [2024-12-08 06:36:16.095083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.052 [2024-12-08 06:36:16.095098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:26.052 passed 00:32:26.311 Test: blockdev nvme passthru rw ...passed 00:32:26.311 Test: blockdev nvme passthru vendor specific ...[2024-12-08 06:36:16.177068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:26.311 [2024-12-08 06:36:16.177106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:26.311 [2024-12-08 06:36:16.177267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:26.311 [2024-12-08 06:36:16.177290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:26.311 [2024-12-08 06:36:16.177430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:26.311 [2024-12-08 06:36:16.177452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:26.311 [2024-12-08 06:36:16.177595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:26.311 [2024-12-08 06:36:16.177617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:26.311 passed 00:32:26.311 Test: blockdev nvme admin passthru ...passed 00:32:26.311 Test: blockdev copy ...passed 00:32:26.311 00:32:26.311 Run Summary: Type Total Ran Passed Failed Inactive 00:32:26.311 suites 1 1 n/a 0 0 00:32:26.311 tests 23 23 23 0 0 00:32:26.311 asserts 152 152 152 0 n/a 00:32:26.311 00:32:26.311 Elapsed time = 1.195 seconds 00:32:26.311 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:26.311 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.311 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:26.311 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.311 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:26.311 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:26.311 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:26.311 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:26.311 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:26.311 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:26.311 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:26.311 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:26.613 rmmod nvme_tcp 00:32:26.613 rmmod nvme_fabrics 00:32:26.613 rmmod nvme_keyring 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
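The teardown that brackets this point in the trace (nvmftestfini; killprocess and iptr follow just below) amounts to the sketch here. The pid and interface names are this run's; reducing _remove_spdk_ns to a plain netns delete is an assumption about that helper.

    modprobe -v -r nvme-tcp               # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill 1229352 && wait 1229352          # killprocess: stop the interrupt-mode target
    # iptr: drop only the rules tagged SPDK_NVMF during setup, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk       # _remove_spdk_ns, simplified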
00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1229352 ']' 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1229352 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1229352 ']' 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1229352 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1229352 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1229352' 00:32:26.613 killing process with pid 1229352 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1229352 00:32:26.613 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1229352 00:32:26.899 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:26.899 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:26.899 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:26.899 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:32:26.899 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:26.899 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:26.899 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:26.899 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:26.899 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:26.899 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.899 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.899 06:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.802 06:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:28.802 00:32:28.802 real 0m6.671s 00:32:28.802 user 
0m9.547s 00:32:28.802 sys 0m2.660s 00:32:28.802 06:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:28.802 06:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:28.802 ************************************ 00:32:28.802 END TEST nvmf_bdevio 00:32:28.802 ************************************ 00:32:28.802 06:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:28.802 00:32:28.802 real 3m54.482s 00:32:28.802 user 8m52.963s 00:32:28.802 sys 1m25.813s 00:32:28.802 06:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:28.802 06:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:28.802 ************************************ 00:32:28.802 END TEST nvmf_target_core_interrupt_mode 00:32:28.802 ************************************ 00:32:28.802 06:36:18 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:28.802 06:36:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:28.802 06:36:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:28.802 06:36:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:28.802 ************************************ 00:32:28.802 START TEST nvmf_interrupt 00:32:28.802 ************************************ 00:32:28.802 06:36:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:28.802 * Looking for test storage... 
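Both this test and the one above open with the same scripts/common.sh version gate, traced step by step just below: lcov reports 1.15 here, so the pre-2.0 --rc option names get enabled. Condensed into a standalone sketch, with the loop condition taken verbatim from the trace:

    # cmp_versions: split both versions on [.-:] and compare field by field;
    # '<' succeeds on the first strictly lower field and fails on a higher one.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }
            (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]   # versions equal: only <=, >=, == succeed
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'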
00:32:28.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:28.802 06:36:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:28.802 06:36:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:32:28.802 06:36:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:29.060 06:36:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:29.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.060 --rc genhtml_branch_coverage=1 00:32:29.060 --rc genhtml_function_coverage=1 00:32:29.060 --rc genhtml_legend=1 00:32:29.060 --rc geninfo_all_blocks=1 00:32:29.060 --rc geninfo_unexecuted_blocks=1 00:32:29.060 00:32:29.060 ' 00:32:29.060 06:36:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:29.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.060 --rc genhtml_branch_coverage=1 00:32:29.061 --rc genhtml_function_coverage=1 00:32:29.061 --rc genhtml_legend=1 00:32:29.061 --rc geninfo_all_blocks=1 00:32:29.061 --rc geninfo_unexecuted_blocks=1 00:32:29.061 00:32:29.061 ' 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:29.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.061 --rc genhtml_branch_coverage=1 00:32:29.061 --rc genhtml_function_coverage=1 00:32:29.061 --rc genhtml_legend=1 00:32:29.061 --rc geninfo_all_blocks=1 00:32:29.061 --rc geninfo_unexecuted_blocks=1 00:32:29.061 00:32:29.061 ' 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:29.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.061 --rc genhtml_branch_coverage=1 00:32:29.061 --rc genhtml_function_coverage=1 00:32:29.061 --rc genhtml_legend=1 00:32:29.061 --rc geninfo_all_blocks=1 00:32:29.061 --rc geninfo_unexecuted_blocks=1 00:32:29.061 00:32:29.061 ' 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:29.061 06:36:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:30.962 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.962 06:36:20 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:30.962 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:30.962 Found net devices under 0000:84:00.0: cvl_0_0 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:30.962 Found net devices under 0000:84:00.1: cvl_0_1 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:30.962 06:36:20 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.962 06:36:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.962 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.962 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.962 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:30.962 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:31.219 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:31.219 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:31.219 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:31.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:31.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:32:31.220 00:32:31.220 --- 10.0.0.2 ping statistics --- 00:32:31.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.220 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:31.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:31.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:32:31.220 00:32:31.220 --- 10.0.0.1 ping statistics --- 00:32:31.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.220 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1231496 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1231496 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1231496 ']' 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.220 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.220 [2024-12-08 06:36:21.186413] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:31.220 [2024-12-08 06:36:21.187462] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:32:31.220 [2024-12-08 06:36:21.187520] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.220 [2024-12-08 06:36:21.259757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:31.220 [2024-12-08 06:36:21.319792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
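[editor's note] The --interrupt-mode flag in the nvmf_tgt launch above is the point of this test: each SPDK reactor parks in an epoll-driven event loop instead of busy-polling, so an idle reactor should sit near 0% CPU in the checks that follow. Distilled from the trace, the launch inside the target namespace looks like this (a sketch; the workspace path is shortened and the PID capture is added for the later probes):

    # start the target on cores 0-1 (-m 0x3), shm id 0, all tracepoint groups, interrupt mode
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!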
00:32:31.220 [2024-12-08 06:36:21.319870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.220 [2024-12-08 06:36:21.319884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.220 [2024-12-08 06:36:21.319895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.220 [2024-12-08 06:36:21.319905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:31.220 [2024-12-08 06:36:21.321468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.220 [2024-12-08 06:36:21.321474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.480 [2024-12-08 06:36:21.413578] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:31.480 [2024-12-08 06:36:21.413583] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:31.480 [2024-12-08 06:36:21.413835] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:31.480 5000+0 records in 00:32:31.480 5000+0 records out 00:32:31.480 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0140129 s, 731 MB/s 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.480 AIO0 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.480 [2024-12-08 06:36:21.506120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.480 06:36:21 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:31.480 [2024-12-08 06:36:21.534401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1231496 0 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1231496 0 idle 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1231496 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1231496 -w 256 00:32:31.480 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1231496 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.28 reactor_0' 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1231496 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.28 reactor_0 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1231496 1 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1231496 1 idle 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1231496 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1231496 -w 256 00:32:31.741 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1231502 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1231502 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1231652 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
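[editor's note] By this point the target has been provisioned entirely over JSON-RPC: a TCP transport, a subsystem backed by the 10 MB AIO bdev created with dd above, and a listener on 10.0.0.2:4420. The rpc.py equivalents of the rpc_cmd calls in the trace are (a sketch; rpc.py stands in for the suite's rpc_cmd wrapper and assumes the default /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf command just launched then loads it for 10 seconds: -q 256 sets the queue depth, -o 4096 the I/O size in bytes, -w randrw -M 30 a 30% read / 70% write random mix, and -c 0xC pins the initiator to cores 2-3, keeping it off the target's reactors on cores 0-1 (hence the "Associating ... with lcore 2/3" lines in the perf output below).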
00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1231496 0 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1231496 0 busy 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1231496 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1231496 -w 256 00:32:32.002 06:36:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1231496 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:00.47 reactor_0' 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1231496 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:00.47 reactor_0 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1231496 1 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1231496 1 busy 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1231496 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1231496 -w 256 00:32:32.002 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:32.260 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1231502 root 20 0 128.2g 48768 35328 R 93.8 0.1 0:00.26 reactor_1' 00:32:32.260 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1231502 root 20 0 128.2g 48768 35328 R 93.8 0.1 0:00.26 reactor_1 00:32:32.260 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:32.260 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:32.260 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:32:32.260 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:32:32.260 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:32.260 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:32.260 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:32.260 06:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:32.260 06:36:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1231652 00:32:42.234 Initializing NVMe Controllers 00:32:42.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:42.234 Controller IO queue size 256, less than required. 00:32:42.234 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:42.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:42.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:42.234 Initialization complete. Launching workers. 
00:32:42.234 ======================================================== 00:32:42.234 Latency(us) 00:32:42.234 Device Information : IOPS MiB/s Average min max 00:32:42.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 14479.50 56.56 17690.48 4130.12 21660.44 00:32:42.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 14331.00 55.98 17874.65 4302.30 26065.52 00:32:42.234 ======================================================== 00:32:42.234 Total : 28810.50 112.54 17782.09 4130.12 26065.52 00:32:42.234 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1231496 0 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1231496 0 idle 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1231496 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1231496 -w 256 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1231496 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.22 reactor_0' 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1231496 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.22 reactor_0 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:42.234 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1231496 1 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1231496 1 idle 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1231496 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1231496 -w 256 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1231502 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.97 reactor_1' 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1231502 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.97 reactor_1 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:42.235 06:36:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:42.495 06:36:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:42.495 06:36:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:42.495 06:36:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:42.495 06:36:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:42.495 06:36:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1231496 0 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1231496 0 idle 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1231496 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1231496 -w 256 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1231496 root 20 0 128.2g 61056 35328 S 13.3 0.1 0:20.33 reactor_0' 00:32:45.031 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1231496 root 20 0 128.2g 61056 35328 S 13.3 0.1 0:20.33 reactor_0 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=13.3 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=13 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1231496 1 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1231496 1 idle 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1231496 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
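[editor's note] Every reactor_is_idle / reactor_is_busy verdict in this test comes from the same probe: one batch iteration of top restricted to the target PID and its threads, filtered to the reactor of interest, with the %CPU column (field 9) compared against a threshold. Distilled to a few lines (a sketch; pid and idx are whichever process and reactor you are checking):

    pid=1231496 idx=1 idle_threshold=30
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    cpu_rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}            # truncate, e.g. 93.8 -> 93, 0.0 -> 0
    (( cpu_rate > idle_threshold )) && echo busy || echo idle

The busy check inverts the comparison, and the test lowers the busy threshold from the default 65 to 30 (BUSY_THRESHOLD=30 above) before asserting that both reactors go busy under perf load.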
00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1231496 -w 256 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1231502 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.01 reactor_1' 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1231502 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.01 reactor_1 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:45.032 06:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:45.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:45.032 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:45.032 rmmod nvme_tcp 00:32:45.032 rmmod nvme_fabrics 00:32:45.032 rmmod nvme_keyring 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1231496 ']' 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1231496 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1231496 ']' 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1231496 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1231496 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1231496' 00:32:45.290 killing process with pid 1231496 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1231496 00:32:45.290 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1231496 00:32:45.548 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:45.548 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:45.548 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:45.548 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:45.548 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:45.548 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:45.548 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:45.548 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:45.548 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:45.548 06:36:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.548 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:45.548 06:36:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.448 06:36:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:47.448 00:32:47.448 real 0m18.610s 00:32:47.448 user 0m37.068s 00:32:47.448 sys 0m6.659s 00:32:47.448 06:36:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.448 06:36:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:47.448 ************************************ 00:32:47.448 END TEST nvmf_interrupt 00:32:47.448 ************************************ 00:32:47.448 00:32:47.448 real 25m2.471s 00:32:47.448 user 58m25.464s 00:32:47.448 sys 6m57.439s 00:32:47.448 06:36:37 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.448 06:36:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.448 ************************************ 00:32:47.448 END TEST nvmf_tcp 00:32:47.448 ************************************ 00:32:47.448 06:36:37 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:47.448 06:36:37 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:47.448 06:36:37 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:47.448 06:36:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:47.448 06:36:37 -- common/autotest_common.sh@10 -- # set +x 00:32:47.448 ************************************ 00:32:47.448 START TEST spdkcli_nvmf_tcp 00:32:47.448 ************************************ 00:32:47.448 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:47.706 * Looking for test storage... 00:32:47.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:47.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.706 --rc genhtml_branch_coverage=1 00:32:47.706 --rc genhtml_function_coverage=1 00:32:47.706 --rc genhtml_legend=1 00:32:47.706 --rc geninfo_all_blocks=1 00:32:47.706 --rc geninfo_unexecuted_blocks=1 00:32:47.706 00:32:47.706 ' 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:47.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.706 --rc genhtml_branch_coverage=1 00:32:47.706 --rc genhtml_function_coverage=1 00:32:47.706 --rc genhtml_legend=1 00:32:47.706 --rc geninfo_all_blocks=1 00:32:47.706 --rc geninfo_unexecuted_blocks=1 00:32:47.706 00:32:47.706 ' 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:47.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.706 --rc genhtml_branch_coverage=1 00:32:47.706 --rc genhtml_function_coverage=1 00:32:47.706 --rc genhtml_legend=1 00:32:47.706 --rc geninfo_all_blocks=1 00:32:47.706 --rc geninfo_unexecuted_blocks=1 00:32:47.706 00:32:47.706 ' 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:47.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.706 --rc genhtml_branch_coverage=1 00:32:47.706 --rc genhtml_function_coverage=1 00:32:47.706 --rc genhtml_legend=1 00:32:47.706 --rc geninfo_all_blocks=1 00:32:47.706 --rc geninfo_unexecuted_blocks=1 00:32:47.706 00:32:47.706 ' 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:47.706 
06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:47.706 06:36:37 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:47.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1233655 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1233655 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1233655 ']' 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.706 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.706 [2024-12-08 06:36:37.733955] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
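
[Annotation] The `[: : integer expression expected` line above is bash stderr interleaved into the trace, and it is a recorded script wart rather than a test failure: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'` because an unset tuning flag expands to the empty string, so `test` has no integer to parse and execution simply falls through. The conventional hardening is to default the expansion; the variable name below is a stand-in, since the trace elides which flag is being tested:

    # As recorded: unset flag -> '[' '' -eq 1 ']' -> "integer expression expected"
    [ "$SOME_FLAG" -eq 1 ] && echo "flag set"
    # Hardened spelling: the numeric test always sees an integer
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"
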
00:32:47.706 [2024-12-08 06:36:37.734056] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233655 ] 00:32:47.706 [2024-12-08 06:36:37.799102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:47.964 [2024-12-08 06:36:37.856676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.964 [2024-12-08 06:36:37.856680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.964 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.964 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:47.964 06:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:47.964 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:47.964 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.964 06:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:47.964 06:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:47.964 06:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:47.964 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.964 06:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.964 06:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:47.964 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:47.964 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:47.964 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:47.964 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:47.964 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:47.964 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:47.964 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:47.964 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:47.964 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:47.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:47.964 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:47.964 ' 00:32:50.495 [2024-12-08 06:36:40.591707] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.875 [2024-12-08 06:36:41.864188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:54.412 [2024-12-08 06:36:44.211386] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:56.317 [2024-12-08 06:36:46.225395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:57.697 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:57.697 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:57.697 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:57.697 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:57.697 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:57.697 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:57.697 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:57.697 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:57.697 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:57.697 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:57.697 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:57.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:57.697 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:57.955 06:36:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:57.955 06:36:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.955 06:36:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:57.955 06:36:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:57.955 06:36:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:57.955 06:36:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:57.955 06:36:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:57.955 06:36:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:58.214 06:36:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:58.475 06:36:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:58.475 06:36:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:58.475 06:36:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:58.475 06:36:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:58.475 
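
[Annotation] Each `Executing command:` entry above is one spdkcli operation, and each maps onto a single JSON-RPC call against the target's /var/tmp/spdk.sock. For reference, a hand-driven equivalent of the first few create steps using scripts/rpc.py; the flag spellings (-t/-u, -a/-s/-m) match rpc_cmd invocations that appear later in this same log, but treat this as a sketch of the mapping, not the test's actual mechanism (the qpair limit spdkcli sets is omitted here):

    ./scripts/rpc.py bdev_malloc_create -b Malloc1 32 512             # 32 MiB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp -u 8192             # io_unit_size=8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 \
        -s N37SXV509SRW -m 4 -a                                       # serial, max_namespaces, allow_any_host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 \
        -t tcp -a 127.0.0.1 -s 4260
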
06:36:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:58.475 06:36:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:58.475 06:36:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:58.475 06:36:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:58.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:58.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:58.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:58.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:58.475 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:58.475 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:58.475 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:58.475 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:58.475 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:58.475 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:58.475 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:58.475 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:58.475 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:58.475 ' 00:33:03.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:03.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:03.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:03.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:03.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:03.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:03.757 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:03.757 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:03.757 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:03.757 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:03.757 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:03.757 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:03.757 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:03.757 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:03.757 06:36:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:03.757 06:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:03.757 06:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:03.757 
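
[Annotation] The clear pass above tears the configuration down in reverse creation order, namespaces before hosts and listeners, listeners before subsystems, subsystems before bdevs, so nothing is deleted while still referenced. Raw-RPC counterparts for a few of the delete steps, with the same sketch caveats as above:

    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1      # nsid 1
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 \
        -t tcp -a 127.0.0.1 -s 4262
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode3
    ./scripts/rpc.py bdev_malloc_delete Malloc6
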
06:36:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1233655 00:33:03.757 06:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1233655 ']' 00:33:03.757 06:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1233655 00:33:03.757 06:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:03.757 06:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:03.757 06:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1233655 00:33:03.757 06:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:03.757 06:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:03.757 06:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1233655' 00:33:03.757 killing process with pid 1233655 00:33:03.757 06:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1233655 00:33:03.757 06:36:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1233655 00:33:04.014 06:36:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:04.014 06:36:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:04.014 06:36:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1233655 ']' 00:33:04.014 06:36:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1233655 00:33:04.014 06:36:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1233655 ']' 00:33:04.014 06:36:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1233655 00:33:04.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1233655) - No such process 00:33:04.014 06:36:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1233655 is not found' 00:33:04.014 Process with pid 1233655 is not found 00:33:04.014 06:36:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:04.014 06:36:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:04.014 06:36:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:04.014 00:33:04.014 real 0m16.487s 00:33:04.014 user 0m35.085s 00:33:04.014 sys 0m0.741s 00:33:04.014 06:36:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:04.014 06:36:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:04.014 ************************************ 00:33:04.014 END TEST spdkcli_nvmf_tcp 00:33:04.014 ************************************ 00:33:04.014 06:36:54 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:04.014 06:36:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:04.014 06:36:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:04.014 06:36:54 -- common/autotest_common.sh@10 -- # set +x 00:33:04.014 ************************************ 00:33:04.014 START TEST nvmf_identify_passthru 00:33:04.014 ************************************ 00:33:04.014 06:36:54 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:04.014 * Looking for test 
storage... 00:33:04.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:04.014 06:36:54 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:04.014 06:36:54 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:33:04.014 06:36:54 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:04.271 06:36:54 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:04.272 06:36:54 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:04.272 06:36:54 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:04.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.272 --rc genhtml_branch_coverage=1 00:33:04.272 --rc genhtml_function_coverage=1 00:33:04.272 --rc genhtml_legend=1 00:33:04.272 --rc geninfo_all_blocks=1 00:33:04.272 --rc geninfo_unexecuted_blocks=1 00:33:04.272 00:33:04.272 ' 00:33:04.272 06:36:54 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:04.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.272 --rc genhtml_branch_coverage=1 00:33:04.272 --rc genhtml_function_coverage=1 00:33:04.272 --rc genhtml_legend=1 00:33:04.272 --rc geninfo_all_blocks=1 00:33:04.272 --rc geninfo_unexecuted_blocks=1 00:33:04.272 00:33:04.272 ' 00:33:04.272 06:36:54 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:04.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.272 --rc genhtml_branch_coverage=1 00:33:04.272 --rc genhtml_function_coverage=1 00:33:04.272 --rc genhtml_legend=1 00:33:04.272 --rc geninfo_all_blocks=1 00:33:04.272 --rc geninfo_unexecuted_blocks=1 00:33:04.272 00:33:04.272 ' 00:33:04.272 06:36:54 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:04.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.272 --rc genhtml_branch_coverage=1 00:33:04.272 --rc genhtml_function_coverage=1 00:33:04.272 --rc genhtml_legend=1 00:33:04.272 --rc geninfo_all_blocks=1 00:33:04.272 --rc geninfo_unexecuted_blocks=1 00:33:04.272 00:33:04.272 ' 00:33:04.272 06:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.272 06:36:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.272 06:36:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.272 06:36:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.272 06:36:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:04.272 06:36:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:04.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:04.272 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:04.272 06:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.272 06:36:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.272 06:36:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.272 06:36:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.272 06:36:54 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.272 06:36:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:04.273 06:36:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.273 06:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:04.273 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:04.273 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:04.273 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:04.273 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:04.273 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:04.273 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.273 06:36:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:04.273 06:36:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.273 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:04.273 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:04.273 06:36:54 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:04.273 06:36:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:06.175 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:06.175 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:06.175 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:06.175 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:06.175 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:06.175 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:06.175 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:06.175 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:06.175 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:06.175 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:06.175 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:06.175 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:06.176 06:36:56 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:06.176 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:06.176 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:06.176 Found net devices under 0000:84:00.0: cvl_0_0 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:06.176 Found net devices under 0000:84:00.1: cvl_0_1 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:06.176 06:36:56 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:06.176 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:06.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:33:06.434 00:33:06.434 --- 10.0.0.2 ping statistics --- 00:33:06.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.434 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:06.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:06.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:33:06.434 00:33:06.434 --- 10.0.0.1 ping statistics --- 00:33:06.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.434 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:06.434 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.435 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:06.435 06:36:56 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:06.435 06:36:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:06.435 06:36:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:33:06.435 06:36:56 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:82:00.0 00:33:06.435 06:36:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:33:06.435 06:36:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:33:06.435 06:36:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:33:06.435 06:36:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:06.435 06:36:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:10.630 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ9142051K1P0FGN 00:33:10.630 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:33:10.630 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:10.630 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:14.825 06:37:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:14.825 06:37:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:14.825 06:37:04 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:14.825 06:37:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:14.825 06:37:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:14.826 06:37:04 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:14.826 06:37:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:14.826 06:37:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1238194 00:33:14.826 06:37:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:14.826 06:37:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:14.826 06:37:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1238194 00:33:14.826 06:37:04 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1238194 ']' 00:33:14.826 06:37:04 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.826 06:37:04 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.826 06:37:04 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.826 06:37:04 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.826 06:37:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.136 [2024-12-08 06:37:04.974788] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:33:15.136 [2024-12-08 06:37:04.974883] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:15.136 [2024-12-08 06:37:05.051617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:15.136 [2024-12-08 06:37:05.109653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:15.136 [2024-12-08 06:37:05.109731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
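
[Annotation] The target here was launched with --wait-for-rpc, so SPDK parks before subsystem init, and the `INFO: Requests:` / `INFO: response:` pairs that follow are literal JSON-RPC 2.0 exchanges on /var/tmp/spdk.sock: nvmf_set_config flips on passthru identify while configuration is still mutable, then framework_start_init resumes startup. A minimal way to replay the first exchange by hand (this assumes a netcat build with unix-socket support; the -U/-q spellings vary by netcat flavor):

    printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"nvmf_set_config","params":{"admin_cmd_passthru":{"identify_ctrlr":true}}}' \
        | nc -q 1 -U /var/tmp/spdk.sock
    # The supported route, and what rpc_cmd wraps in the trace below:
    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    ./scripts/rpc.py framework_start_init
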
00:33:15.136 [2024-12-08 06:37:05.109761] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:15.136 [2024-12-08 06:37:05.109773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:15.136 [2024-12-08 06:37:05.109784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:15.136 [2024-12-08 06:37:05.111400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.136 [2024-12-08 06:37:05.111424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:15.136 [2024-12-08 06:37:05.111484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:15.136 [2024-12-08 06:37:05.111487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.136 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:15.136 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:15.136 06:37:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:15.136 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.136 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.136 INFO: Log level set to 20 00:33:15.136 INFO: Requests: 00:33:15.136 { 00:33:15.136 "jsonrpc": "2.0", 00:33:15.136 "method": "nvmf_set_config", 00:33:15.136 "id": 1, 00:33:15.136 "params": { 00:33:15.136 "admin_cmd_passthru": { 00:33:15.136 "identify_ctrlr": true 00:33:15.136 } 00:33:15.136 } 00:33:15.136 } 00:33:15.136 00:33:15.136 INFO: response: 00:33:15.136 { 00:33:15.136 "jsonrpc": "2.0", 00:33:15.136 "id": 1, 00:33:15.136 "result": true 00:33:15.136 } 00:33:15.136 00:33:15.136 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.136 06:37:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:15.136 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.136 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.136 INFO: Setting log level to 20 00:33:15.136 INFO: Setting log level to 20 00:33:15.136 INFO: Log level set to 20 00:33:15.136 INFO: Log level set to 20 00:33:15.136 INFO: Requests: 00:33:15.136 { 00:33:15.136 "jsonrpc": "2.0", 00:33:15.136 "method": "framework_start_init", 00:33:15.136 "id": 1 00:33:15.136 } 00:33:15.136 00:33:15.136 INFO: Requests: 00:33:15.136 { 00:33:15.136 "jsonrpc": "2.0", 00:33:15.136 "method": "framework_start_init", 00:33:15.136 "id": 1 00:33:15.136 } 00:33:15.136 00:33:15.410 [2024-12-08 06:37:05.297615] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:15.410 INFO: response: 00:33:15.410 { 00:33:15.410 "jsonrpc": "2.0", 00:33:15.410 "id": 1, 00:33:15.410 "result": true 00:33:15.410 } 00:33:15.410 00:33:15.410 INFO: response: 00:33:15.410 { 00:33:15.410 "jsonrpc": "2.0", 00:33:15.410 "id": 1, 00:33:15.410 "result": true 00:33:15.410 } 00:33:15.410 00:33:15.410 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.410 06:37:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:15.410 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.410 06:37:05 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:33:15.410 INFO: Setting log level to 40 00:33:15.410 INFO: Setting log level to 40 00:33:15.410 INFO: Setting log level to 40 00:33:15.410 [2024-12-08 06:37:05.307637] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.410 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.410 06:37:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:15.410 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:15.410 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.410 06:37:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:33:15.410 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.410 06:37:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.696 Nvme0n1 00:33:18.696 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.696 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:18.696 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.696 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.696 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.696 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:18.696 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.696 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.696 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.696 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.696 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.696 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.696 [2024-12-08 06:37:08.209259] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.696 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.696 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:18.696 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.696 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.696 [ 00:33:18.696 { 00:33:18.696 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:18.696 "subtype": "Discovery", 00:33:18.696 "listen_addresses": [], 00:33:18.696 "allow_any_host": true, 00:33:18.696 "hosts": [] 00:33:18.696 }, 00:33:18.696 { 00:33:18.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.696 "subtype": "NVMe", 00:33:18.696 "listen_addresses": [ 00:33:18.696 { 00:33:18.696 "trtype": "TCP", 00:33:18.696 "adrfam": "IPv4", 00:33:18.696 "traddr": "10.0.0.2", 00:33:18.696 "trsvcid": "4420" 00:33:18.696 } 00:33:18.696 ], 00:33:18.696 "allow_any_host": true, 00:33:18.696 "hosts": [], 00:33:18.696 "serial_number": 
"SPDK00000000000001", 00:33:18.696 "model_number": "SPDK bdev Controller", 00:33:18.696 "max_namespaces": 1, 00:33:18.696 "min_cntlid": 1, 00:33:18.696 "max_cntlid": 65519, 00:33:18.696 "namespaces": [ 00:33:18.696 { 00:33:18.697 "nsid": 1, 00:33:18.697 "bdev_name": "Nvme0n1", 00:33:18.697 "name": "Nvme0n1", 00:33:18.697 "nguid": "FAE765EC39EF4EB9A305D426605BEDBB", 00:33:18.697 "uuid": "fae765ec-39ef-4eb9-a305-d426605bedbb" 00:33:18.697 } 00:33:18.697 ] 00:33:18.697 } 00:33:18.697 ] 00:33:18.697 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.697 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:18.697 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:18.697 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:18.697 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:33:18.697 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:18.697 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:18.697 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:18.697 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:18.697 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:33:18.697 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:18.697 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:18.697 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.697 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:18.697 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.697 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:18.697 06:37:08 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:18.697 06:37:08 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:18.697 06:37:08 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:18.697 06:37:08 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:18.697 06:37:08 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:18.697 06:37:08 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:18.697 06:37:08 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:18.697 rmmod nvme_tcp 00:33:18.697 rmmod nvme_fabrics 00:33:18.697 rmmod nvme_keyring 00:33:18.956 06:37:08 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:18.956 06:37:08 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:18.956 06:37:08 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:18.956 06:37:08 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 1238194 ']' 00:33:18.956 06:37:08 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1238194 00:33:18.956 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1238194 ']' 00:33:18.956 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1238194 00:33:18.956 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:18.956 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:18.956 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1238194 00:33:18.956 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:18.956 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:18.956 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1238194' 00:33:18.956 killing process with pid 1238194 00:33:18.956 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1238194 00:33:18.956 06:37:08 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1238194 00:33:20.863 06:37:10 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.863 06:37:10 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.863 06:37:10 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.863 06:37:10 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:20.863 06:37:10 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:20.863 06:37:10 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.863 06:37:10 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:20.863 06:37:10 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.863 06:37:10 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.863 06:37:10 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.863 06:37:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:20.863 06:37:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.765 06:37:12 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.765 00:33:22.765 real 0m18.486s 00:33:22.765 user 0m27.336s 00:33:22.765 sys 0m3.194s 00:33:22.765 06:37:12 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.765 06:37:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:22.765 ************************************ 00:33:22.765 END TEST nvmf_identify_passthru 00:33:22.765 ************************************ 00:33:22.765 06:37:12 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:22.765 06:37:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:22.765 06:37:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.765 06:37:12 -- common/autotest_common.sh@10 -- # set +x 00:33:22.765 ************************************ 00:33:22.765 START TEST nvmf_dif 00:33:22.765 ************************************ 00:33:22.765 06:37:12 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:22.765 * Looking for test 
storage... 00:33:22.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.765 06:37:12 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:22.765 06:37:12 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:33:22.765 06:37:12 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:22.766 06:37:12 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:22.766 06:37:12 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.766 06:37:12 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:22.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.766 --rc genhtml_branch_coverage=1 00:33:22.766 --rc genhtml_function_coverage=1 00:33:22.766 --rc genhtml_legend=1 00:33:22.766 --rc geninfo_all_blocks=1 00:33:22.766 --rc geninfo_unexecuted_blocks=1 00:33:22.766 00:33:22.766 ' 00:33:22.766 06:37:12 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:22.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.766 --rc genhtml_branch_coverage=1 00:33:22.766 --rc genhtml_function_coverage=1 00:33:22.766 --rc genhtml_legend=1 00:33:22.766 --rc geninfo_all_blocks=1 00:33:22.766 --rc geninfo_unexecuted_blocks=1 00:33:22.766 00:33:22.766 ' 00:33:22.766 06:37:12 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:22.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.766 --rc genhtml_branch_coverage=1 00:33:22.766 --rc genhtml_function_coverage=1 00:33:22.766 --rc genhtml_legend=1 00:33:22.766 --rc geninfo_all_blocks=1 00:33:22.766 --rc geninfo_unexecuted_blocks=1 00:33:22.766 00:33:22.766 ' 00:33:22.766 06:37:12 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:22.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.766 --rc genhtml_branch_coverage=1 00:33:22.766 --rc genhtml_function_coverage=1 00:33:22.766 --rc genhtml_legend=1 00:33:22.766 --rc geninfo_all_blocks=1 00:33:22.766 --rc geninfo_unexecuted_blocks=1 00:33:22.766 00:33:22.766 ' 00:33:22.766 06:37:12 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.766 06:37:12 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.766 06:37:12 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.766 06:37:12 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.766 06:37:12 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.766 06:37:12 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:22.766 06:37:12 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:22.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.766 06:37:12 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:22.766 06:37:12 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:22.766 06:37:12 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:22.766 06:37:12 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:22.766 06:37:12 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.766 06:37:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:22.766 06:37:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:22.766 06:37:12 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:33:22.766 06:37:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:25.296 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.296 
06:37:14 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:25.296 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:25.296 Found net devices under 0000:84:00.0: cvl_0_0 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:25.296 Found net devices under 0000:84:00.1: cvl_0_1 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:25.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:25.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:33:25.296 00:33:25.296 --- 10.0.0.2 ping statistics --- 00:33:25.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.296 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:25.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:33:25.296 00:33:25.296 --- 10.0.0.1 ping statistics --- 00:33:25.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.296 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:25.296 06:37:14 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:26.228 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:26.228 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:26.228 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:26.228 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:26.228 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:26.228 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:26.228 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:26.228 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:26.228 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:26.228 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:26.228 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:26.228 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:26.228 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:26.228 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:26.228 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:26.228 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:26.228 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:26.228 06:37:16 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.228 06:37:16 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:26.228 06:37:16 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:26.228 06:37:16 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.228 06:37:16 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:26.228 06:37:16 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:26.228 06:37:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:26.228 06:37:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:26.228 06:37:16 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:26.228 06:37:16 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:26.228 06:37:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:26.228 06:37:16 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1241486 00:33:26.228 06:37:16 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:26.228 06:37:16 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1241486 00:33:26.228 06:37:16 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1241486 ']' 00:33:26.228 06:37:16 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.228 06:37:16 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.228 06:37:16 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:33:26.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.228 06:37:16 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.228 06:37:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:26.228 [2024-12-08 06:37:16.315007] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:33:26.228 [2024-12-08 06:37:16.315088] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.487 [2024-12-08 06:37:16.389262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.487 [2024-12-08 06:37:16.444906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.487 [2024-12-08 06:37:16.444978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.487 [2024-12-08 06:37:16.444991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.487 [2024-12-08 06:37:16.445036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.487 [2024-12-08 06:37:16.445048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:26.487 [2024-12-08 06:37:16.445693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.487 06:37:16 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.487 06:37:16 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:26.487 06:37:16 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:26.487 06:37:16 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:26.487 06:37:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:26.487 06:37:16 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.487 06:37:16 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:26.487 06:37:16 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:26.487 06:37:16 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.487 06:37:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:26.487 [2024-12-08 06:37:16.586493] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.487 06:37:16 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.487 06:37:16 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:26.487 06:37:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:26.487 06:37:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.487 06:37:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:26.746 ************************************ 00:33:26.746 START TEST fio_dif_1_default 00:33:26.746 ************************************ 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:26.746 bdev_null0 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:26.746 [2024-12-08 06:37:16.642840] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:26.746 { 00:33:26.746 "params": { 00:33:26.746 "name": "Nvme$subsystem", 00:33:26.746 "trtype": "$TEST_TRANSPORT", 00:33:26.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:26.746 "adrfam": "ipv4", 00:33:26.746 "trsvcid": "$NVMF_PORT", 00:33:26.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:26.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:26.746 "hdgst": ${hdgst:-false}, 00:33:26.746 "ddgst": ${ddgst:-false} 00:33:26.746 }, 00:33:26.746 "method": "bdev_nvme_attach_controller" 00:33:26.746 } 00:33:26.746 EOF 00:33:26.746 )") 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
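For anyone replaying this fio_dif_1_default setup outside the autotest harness: the rpc_cmd calls traced above reduce to four plain RPCs against the running nvmf_tgt. A minimal sketch follows; the arguments are copied verbatim from the trace, while the rpc.py invocation path and the default RPC socket at /var/tmp/spdk.sock are assumptions of this note, not something the harness shows.

# Sketch: manual replay of the target-side setup for fio_dif_1_default.
# Assumes a running nvmf_tgt with its RPC socket at /var/tmp/spdk.sock
# and $SPDK_DIR pointing at the SPDK checkout (both are assumptions).
$SPDK_DIR/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

The null bdev's --md-size 16 --dif-type 1 is what makes this a DIF test at all: each 512-byte block carries 16 bytes of metadata with Type 1 protection, and the transport was created earlier with --dif-insert-or-strip, so the TCP transport inserts and strips the protection information on the wire.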
00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:26.746 "params": { 00:33:26.746 "name": "Nvme0", 00:33:26.746 "trtype": "tcp", 00:33:26.746 "traddr": "10.0.0.2", 00:33:26.746 "adrfam": "ipv4", 00:33:26.746 "trsvcid": "4420", 00:33:26.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.746 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:26.746 "hdgst": false, 00:33:26.746 "ddgst": false 00:33:26.746 }, 00:33:26.746 "method": "bdev_nvme_attach_controller" 00:33:26.746 }' 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:26.746 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:27.007 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:27.007 fio-3.35 00:33:27.007 Starting 1 thread 00:33:39.210 00:33:39.210 filename0: (groupid=0, jobs=1): err= 0: pid=1241715: Sun Dec 8 06:37:27 2024 00:33:39.210 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10025msec) 00:33:39.210 slat (nsec): min=7089, max=61995, avg=9144.60, stdev=3294.22 00:33:39.210 clat (usec): min=40881, max=45709, avg=41740.74, stdev=500.91 00:33:39.210 lat (usec): min=40888, max=45748, avg=41749.88, stdev=501.05 00:33:39.210 clat percentiles (usec): 00:33:39.210 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:39.210 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:33:39.210 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:39.210 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:33:39.210 | 99.99th=[45876] 00:33:39.210 bw ( KiB/s): min= 352, max= 384, per=99.73%, avg=382.35, stdev= 7.15, samples=20 00:33:39.210 iops : min= 88, max= 96, avg=95.55, stdev= 1.79, samples=20 00:33:39.210 lat (msec) : 50=100.00% 00:33:39.210 cpu : usr=90.94%, sys=8.76%, ctx=19, majf=0, minf=9 00:33:39.210 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.210 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.210 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:39.210 00:33:39.210 Run status group 0 (all jobs): 
00:33:39.210 READ: bw=383KiB/s (392kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=3840KiB (3932kB), run=10025-10025msec 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.210 00:33:39.210 real 0m11.220s 00:33:39.210 user 0m10.482s 00:33:39.210 sys 0m1.158s 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.210 06:37:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:39.210 ************************************ 00:33:39.210 END TEST fio_dif_1_default 00:33:39.210 ************************************ 00:33:39.210 06:37:27 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:39.211 06:37:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:39.211 06:37:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:39.211 06:37:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:39.211 ************************************ 00:33:39.211 START TEST fio_dif_1_multi_subsystems 00:33:39.211 ************************************ 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.211 bdev_null0 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.211 [2024-12-08 06:37:27.917979] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.211 bdev_null1 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:39.211 { 00:33:39.211 "params": { 00:33:39.211 "name": "Nvme$subsystem", 00:33:39.211 "trtype": "$TEST_TRANSPORT", 00:33:39.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.211 "adrfam": "ipv4", 00:33:39.211 "trsvcid": "$NVMF_PORT", 00:33:39.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.211 "hdgst": ${hdgst:-false}, 00:33:39.211 "ddgst": ${ddgst:-false} 00:33:39.211 }, 00:33:39.211 "method": "bdev_nvme_attach_controller" 00:33:39.211 } 00:33:39.211 EOF 00:33:39.211 )") 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:39.211 { 00:33:39.211 "params": { 00:33:39.211 "name": "Nvme$subsystem", 00:33:39.211 "trtype": "$TEST_TRANSPORT", 00:33:39.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.211 "adrfam": "ipv4", 00:33:39.211 "trsvcid": "$NVMF_PORT", 00:33:39.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.211 "hdgst": ${hdgst:-false}, 00:33:39.211 "ddgst": ${ddgst:-false} 00:33:39.211 }, 00:33:39.211 "method": "bdev_nvme_attach_controller" 00:33:39.211 } 00:33:39.211 EOF 00:33:39.211 )") 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:39.211 "params": { 00:33:39.211 "name": "Nvme0", 00:33:39.211 "trtype": "tcp", 00:33:39.211 "traddr": "10.0.0.2", 00:33:39.211 "adrfam": "ipv4", 00:33:39.211 "trsvcid": "4420", 00:33:39.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:39.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:39.211 "hdgst": false, 00:33:39.211 "ddgst": false 00:33:39.211 }, 00:33:39.211 "method": "bdev_nvme_attach_controller" 00:33:39.211 },{ 00:33:39.211 "params": { 00:33:39.211 "name": "Nvme1", 00:33:39.211 "trtype": "tcp", 00:33:39.211 "traddr": "10.0.0.2", 00:33:39.211 "adrfam": "ipv4", 00:33:39.211 "trsvcid": "4420", 00:33:39.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:39.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:39.211 "hdgst": false, 00:33:39.211 "ddgst": false 00:33:39.211 }, 00:33:39.211 "method": "bdev_nvme_attach_controller" 00:33:39.211 }' 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:39.211 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.212 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:39.212 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:39.212 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:39.212 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:39.212 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:39.212 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.212 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:39.212 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:39.212 fio-3.35 00:33:39.212 Starting 2 threads 00:33:49.178 00:33:49.178 filename0: (groupid=0, jobs=1): err= 0: pid=1243236: Sun Dec 8 06:37:39 2024 00:33:49.178 read: IOPS=173, BW=695KiB/s (712kB/s)(6960KiB/10015msec) 00:33:49.178 slat (nsec): min=7647, max=31024, avg=9987.04, stdev=3062.02 00:33:49.178 clat (usec): min=494, max=42450, avg=22990.93, stdev=20331.95 00:33:49.178 lat (usec): min=502, max=42463, avg=23000.92, stdev=20331.63 00:33:49.178 clat percentiles (usec): 00:33:49.178 | 1.00th=[ 523], 5.00th=[ 562], 10.00th=[ 578], 20.00th=[ 594], 00:33:49.178 | 30.00th=[ 627], 40.00th=[ 799], 50.00th=[41157], 60.00th=[41157], 00:33:49.178 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:33:49.178 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:49.179 | 99.99th=[42206] 00:33:49.179 bw ( KiB/s): min= 576, max= 768, per=64.45%, avg=694.40, stdev=58.82, samples=20 00:33:49.179 iops : min= 144, max= 192, avg=173.60, stdev=14.71, samples=20 00:33:49.179 lat (usec) : 500=0.17%, 750=37.59%, 1000=7.07% 00:33:49.179 lat (msec) : 2=0.23%, 4=0.23%, 50=54.71% 00:33:49.179 cpu : usr=94.17%, sys=5.27%, ctx=37, majf=0, minf=0 00:33:49.179 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.179 issued rwts: total=1740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.179 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:49.179 filename1: (groupid=0, jobs=1): err= 0: pid=1243237: Sun Dec 8 06:37:39 2024 00:33:49.179 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10015msec) 00:33:49.179 slat (nsec): min=7423, max=31299, avg=9865.21, stdev=3020.02 00:33:49.179 clat (usec): min=40758, max=43437, avg=41871.19, stdev=324.07 00:33:49.179 lat (usec): min=40766, max=43462, avg=41881.06, stdev=324.23 00:33:49.179 clat percentiles (usec): 00:33:49.179 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:33:49.179 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:33:49.179 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:49.179 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:33:49.179 | 99.99th=[43254] 00:33:49.179 bw ( KiB/s): min= 352, max= 384, per=35.29%, avg=380.80, stdev= 9.85, samples=20 00:33:49.179 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:33:49.179 lat (msec) : 50=100.00% 00:33:49.179 cpu : usr=94.67%, sys=5.00%, ctx=39, majf=0, minf=9 00:33:49.179 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:49.179 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.179 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:49.179 00:33:49.179 Run status group 0 (all jobs): 00:33:49.179 READ: bw=1077KiB/s (1103kB/s), 382KiB/s-695KiB/s (391kB/s-712kB/s), io=10.5MiB (11.0MB), run=10015-10015msec 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.438 00:33:49.438 real 0m11.528s 00:33:49.438 user 0m20.334s 00:33:49.438 sys 0m1.338s 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.438 06:37:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:49.438 ************************************ 00:33:49.438 END TEST fio_dif_1_multi_subsystems 00:33:49.438 ************************************ 00:33:49.438 06:37:39 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:49.438 06:37:39 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:49.438 06:37:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:49.438 06:37:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.438 ************************************ 00:33:49.438 START TEST fio_dif_rand_params 00:33:49.438 ************************************ 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.438 bdev_null0 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.438 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:49.439 [2024-12-08 06:37:39.502674] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:49.439 { 00:33:49.439 "params": { 00:33:49.439 "name": "Nvme$subsystem", 00:33:49.439 "trtype": "$TEST_TRANSPORT", 00:33:49.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:49.439 "adrfam": "ipv4", 00:33:49.439 "trsvcid": "$NVMF_PORT", 00:33:49.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:49.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:49.439 "hdgst": ${hdgst:-false}, 00:33:49.439 "ddgst": ${ddgst:-false} 00:33:49.439 }, 00:33:49.439 "method": "bdev_nvme_attach_controller" 00:33:49.439 } 00:33:49.439 EOF 00:33:49.439 )") 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
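
The create_subsystem 0 step traced above reduces to four RPCs against the target. A standalone sketch using scripts/rpc.py (the stock SPDK RPC client; the rpc_cmd helper seen in this log wraps the same calls), with every argument taken verbatim from the trace:

    # null bdev: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 3
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # export it over NVMe/TCP on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
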
00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:49.439 "params": { 00:33:49.439 "name": "Nvme0", 00:33:49.439 "trtype": "tcp", 00:33:49.439 "traddr": "10.0.0.2", 00:33:49.439 "adrfam": "ipv4", 00:33:49.439 "trsvcid": "4420", 00:33:49.439 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:49.439 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:49.439 "hdgst": false, 00:33:49.439 "ddgst": false 00:33:49.439 }, 00:33:49.439 "method": "bdev_nvme_attach_controller" 00:33:49.439 }' 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:49.439 06:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.698 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:49.698 ... 
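
The fio launch traced above preloads the SPDK bdev engine into stock fio and feeds both generated inputs through /dev/fd descriptors. A minimal sketch of the same shape, assuming process substitution for the fd plumbing (the LD_PRELOAD path comes from the trace; gen_fio_conf's output is only visible in this log through the filename0 job line):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0) <(gen_fio_conf)
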
00:33:49.698 fio-3.35 00:33:49.698 Starting 3 threads 00:33:56.259 00:33:56.259 filename0: (groupid=0, jobs=1): err= 0: pid=1244636: Sun Dec 8 06:37:45 2024 00:33:56.259 read: IOPS=240, BW=30.1MiB/s (31.6MB/s)(152MiB/5048msec) 00:33:56.259 slat (nsec): min=4691, max=50208, avg=17160.91, stdev=4354.86 00:33:56.259 clat (usec): min=5761, max=56106, avg=12397.50, stdev=6776.76 00:33:56.259 lat (usec): min=5774, max=56130, avg=12414.66, stdev=6776.40 00:33:56.259 clat percentiles (usec): 00:33:56.260 | 1.00th=[ 7111], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 9765], 00:33:56.260 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:33:56.260 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14222], 95.00th=[14877], 00:33:56.260 | 99.00th=[51119], 99.50th=[52167], 99.90th=[56361], 99.95th=[56361], 00:33:56.260 | 99.99th=[56361] 00:33:56.260 bw ( KiB/s): min=18432, max=35584, per=33.54%, avg=31052.80, stdev=5094.41, samples=10 00:33:56.260 iops : min= 144, max= 278, avg=242.60, stdev=39.80, samples=10 00:33:56.260 lat (msec) : 10=23.19%, 20=73.93%, 50=1.56%, 100=1.32% 00:33:56.260 cpu : usr=94.12%, sys=5.31%, ctx=31, majf=0, minf=58 00:33:56.260 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.260 issued rwts: total=1216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.260 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:56.260 filename0: (groupid=0, jobs=1): err= 0: pid=1244637: Sun Dec 8 06:37:45 2024 00:33:56.260 read: IOPS=241, BW=30.2MiB/s (31.6MB/s)(152MiB/5046msec) 00:33:56.260 slat (nsec): min=8045, max=37515, avg=16190.53, stdev=4162.17 00:33:56.260 clat (usec): min=4949, max=54574, avg=12371.28, stdev=7608.42 00:33:56.260 lat (usec): min=4958, max=54589, avg=12387.47, stdev=7607.83 00:33:56.260 clat percentiles (usec): 00:33:56.260 | 1.00th=[ 6652], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 9765], 00:33:56.260 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11469], 00:33:56.260 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13435], 95.00th=[14615], 00:33:56.260 | 99.00th=[52691], 99.50th=[53740], 99.90th=[53740], 99.95th=[54789], 00:33:56.260 | 99.99th=[54789] 00:33:56.260 bw ( KiB/s): min=17699, max=38400, per=33.60%, avg=31107.50, stdev=5889.95, samples=10 00:33:56.260 iops : min= 138, max= 300, avg=243.00, stdev=46.08, samples=10 00:33:56.260 lat (msec) : 10=22.99%, 20=73.40%, 50=1.64%, 100=1.97% 00:33:56.260 cpu : usr=95.10%, sys=4.40%, ctx=8, majf=0, minf=20 00:33:56.260 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.260 issued rwts: total=1218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.260 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:56.260 filename0: (groupid=0, jobs=1): err= 0: pid=1244638: Sun Dec 8 06:37:45 2024 00:33:56.260 read: IOPS=241, BW=30.1MiB/s (31.6MB/s)(152MiB/5047msec) 00:33:56.260 slat (nsec): min=8052, max=51466, avg=16693.06, stdev=4128.51 00:33:56.260 clat (usec): min=4093, max=54799, avg=12383.05, stdev=6018.56 00:33:56.260 lat (usec): min=4106, max=54815, avg=12399.74, stdev=6018.74 00:33:56.260 clat percentiles (usec): 00:33:56.260 | 1.00th=[ 4948], 5.00th=[ 7439], 10.00th=[ 8029], 20.00th=[ 
9503], 00:33:56.260 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11994], 60.00th=[12518], 00:33:56.260 | 70.00th=[13042], 80.00th=[13698], 90.00th=[14615], 95.00th=[15401], 00:33:56.260 | 99.00th=[51119], 99.50th=[52691], 99.90th=[54264], 99.95th=[54789], 00:33:56.260 | 99.99th=[54789] 00:33:56.260 bw ( KiB/s): min=24320, max=38144, per=33.58%, avg=31085.60, stdev=4614.36, samples=10 00:33:56.260 iops : min= 190, max= 298, avg=242.80, stdev=35.98, samples=10 00:33:56.260 lat (msec) : 10=23.09%, 20=74.77%, 50=0.82%, 100=1.31% 00:33:56.260 cpu : usr=92.79%, sys=5.85%, ctx=97, majf=0, minf=89 00:33:56.260 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.260 issued rwts: total=1217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.260 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:56.260 00:33:56.260 Run status group 0 (all jobs): 00:33:56.260 READ: bw=90.4MiB/s (94.8MB/s), 30.1MiB/s-30.2MiB/s (31.6MB/s-31.6MB/s), io=456MiB (479MB), run=5046-5048msec 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.260 bdev_null0 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.260 [2024-12-08 06:37:45.747446] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.260 bdev_null1 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.260 bdev_null2 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.260 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:56.261 { 00:33:56.261 "params": { 00:33:56.261 "name": 
"Nvme$subsystem", 00:33:56.261 "trtype": "$TEST_TRANSPORT", 00:33:56.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.261 "adrfam": "ipv4", 00:33:56.261 "trsvcid": "$NVMF_PORT", 00:33:56.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.261 "hdgst": ${hdgst:-false}, 00:33:56.261 "ddgst": ${ddgst:-false} 00:33:56.261 }, 00:33:56.261 "method": "bdev_nvme_attach_controller" 00:33:56.261 } 00:33:56.261 EOF 00:33:56.261 )") 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:56.261 { 00:33:56.261 "params": { 00:33:56.261 "name": "Nvme$subsystem", 00:33:56.261 "trtype": "$TEST_TRANSPORT", 00:33:56.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.261 "adrfam": "ipv4", 00:33:56.261 "trsvcid": "$NVMF_PORT", 00:33:56.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.261 "hdgst": ${hdgst:-false}, 00:33:56.261 "ddgst": ${ddgst:-false} 00:33:56.261 }, 00:33:56.261 "method": "bdev_nvme_attach_controller" 00:33:56.261 } 00:33:56.261 EOF 00:33:56.261 )") 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:56.261 { 00:33:56.261 "params": { 00:33:56.261 "name": "Nvme$subsystem", 00:33:56.261 "trtype": "$TEST_TRANSPORT", 00:33:56.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.261 "adrfam": "ipv4", 00:33:56.261 "trsvcid": "$NVMF_PORT", 00:33:56.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.261 "hdgst": ${hdgst:-false}, 00:33:56.261 "ddgst": ${ddgst:-false} 00:33:56.261 }, 00:33:56.261 "method": "bdev_nvme_attach_controller" 00:33:56.261 } 00:33:56.261 EOF 00:33:56.261 )") 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:56.261 "params": { 00:33:56.261 "name": "Nvme0", 00:33:56.261 "trtype": "tcp", 00:33:56.261 "traddr": "10.0.0.2", 00:33:56.261 "adrfam": "ipv4", 00:33:56.261 "trsvcid": "4420", 00:33:56.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:56.261 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:56.261 "hdgst": false, 00:33:56.261 "ddgst": false 00:33:56.261 }, 00:33:56.261 "method": "bdev_nvme_attach_controller" 00:33:56.261 },{ 00:33:56.261 "params": { 00:33:56.261 "name": "Nvme1", 00:33:56.261 "trtype": "tcp", 00:33:56.261 "traddr": "10.0.0.2", 00:33:56.261 "adrfam": "ipv4", 00:33:56.261 "trsvcid": "4420", 00:33:56.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:56.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:56.261 "hdgst": false, 00:33:56.261 "ddgst": false 00:33:56.261 }, 00:33:56.261 "method": "bdev_nvme_attach_controller" 00:33:56.261 },{ 00:33:56.261 "params": { 00:33:56.261 "name": "Nvme2", 00:33:56.261 "trtype": "tcp", 00:33:56.261 "traddr": "10.0.0.2", 00:33:56.261 "adrfam": "ipv4", 00:33:56.261 "trsvcid": "4420", 00:33:56.261 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:56.261 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:56.261 "hdgst": false, 00:33:56.261 "ddgst": false 00:33:56.261 }, 00:33:56.261 "method": "bdev_nvme_attach_controller" 00:33:56.261 }' 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:56.261 06:37:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:56.261 06:37:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:56.261 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:56.261 ... 00:33:56.261 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:56.261 ... 00:33:56.261 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:56.261 ... 00:33:56.261 fio-3.35 00:33:56.261 Starting 24 threads 00:34:08.465 00:34:08.465 filename0: (groupid=0, jobs=1): err= 0: pid=1245495: Sun Dec 8 06:37:57 2024 00:34:08.465 read: IOPS=463, BW=1856KiB/s (1900kB/s)(18.1MiB/10001msec) 00:34:08.465 slat (nsec): min=6011, max=91178, avg=35157.45, stdev=13247.80 00:34:08.465 clat (usec): min=31158, max=45210, avg=34196.40, stdev=1213.44 00:34:08.465 lat (usec): min=31203, max=45228, avg=34231.56, stdev=1210.03 00:34:08.465 clat percentiles (usec): 00:34:08.465 | 1.00th=[31851], 5.00th=[33162], 10.00th=[33817], 20.00th=[33817], 00:34:08.465 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:08.465 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.465 | 99.00th=[39584], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:34:08.465 | 99.99th=[45351] 00:34:08.465 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1852.79, stdev=65.51, samples=19 00:34:08.465 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:34:08.465 lat (msec) : 50=100.00% 00:34:08.465 cpu : usr=97.94%, sys=1.35%, ctx=54, majf=0, minf=39 00:34:08.465 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.465 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.465 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.465 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.465 filename0: (groupid=0, jobs=1): err= 0: pid=1245496: Sun Dec 8 06:37:57 2024 00:34:08.465 read: IOPS=463, BW=1856KiB/s (1900kB/s)(18.1MiB/10001msec) 00:34:08.465 slat (nsec): min=9629, max=90829, avg=36383.22, stdev=10349.17 00:34:08.465 clat (usec): min=20130, max=56474, avg=34158.03, stdev=1832.48 00:34:08.465 lat (usec): min=20161, max=56490, avg=34194.41, stdev=1832.39 00:34:08.465 clat percentiles (usec): 00:34:08.465 | 1.00th=[32113], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:34:08.465 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:08.465 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.465 | 99.00th=[39584], 99.50th=[44303], 99.90th=[56361], 99.95th=[56361], 00:34:08.465 | 99.99th=[56361] 00:34:08.465 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1852.63, stdev=65.66, samples=19 00:34:08.465 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:34:08.465 lat (msec) : 50=99.66%, 100=0.34% 00:34:08.465 cpu : usr=97.97%, sys=1.37%, ctx=39, majf=0, minf=29 00:34:08.465 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 
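
fio reports "Starting 24 threads" just above because the @109 parameters set numjobs=8 across three files, one per null-bdev subsystem: 3 x 8 = 24. The generated job file itself is never echoed in this log; a sketch of the shape implied by the filename0/1/2 lines, with the NvmeXn1 bdev names assumed from SPDK's usual controller/namespace naming (thread=1 is required by the SPDK bdev engine):

    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=4096
    iodepth=16
    numjobs=8

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1

    [filename2]
    filename=Nvme2n1
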
00:34:08.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.465 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.465 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.465 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.465 filename0: (groupid=0, jobs=1): err= 0: pid=1245497: Sun Dec 8 06:37:57 2024 00:34:08.465 read: IOPS=463, BW=1856KiB/s (1900kB/s)(18.1MiB/10001msec) 00:34:08.465 slat (usec): min=6, max=118, avg=49.96, stdev=23.02 00:34:08.465 clat (usec): min=20201, max=56534, avg=34034.74, stdev=1862.73 00:34:08.465 lat (usec): min=20233, max=56551, avg=34084.70, stdev=1859.14 00:34:08.465 clat percentiles (usec): 00:34:08.465 | 1.00th=[32113], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:34:08.465 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:08.465 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.465 | 99.00th=[39584], 99.50th=[44303], 99.90th=[56361], 99.95th=[56361], 00:34:08.465 | 99.99th=[56361] 00:34:08.465 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1852.63, stdev=65.66, samples=19 00:34:08.465 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:34:08.465 lat (msec) : 50=99.66%, 100=0.34% 00:34:08.465 cpu : usr=98.07%, sys=1.44%, ctx=26, majf=0, minf=27 00:34:08.465 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.465 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.465 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.465 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.465 filename0: (groupid=0, jobs=1): err= 0: pid=1245498: Sun Dec 8 06:37:57 2024 00:34:08.465 read: IOPS=464, BW=1856KiB/s (1901kB/s)(18.1MiB/10000msec) 00:34:08.465 slat (usec): min=11, max=114, avg=44.00, stdev=18.10 00:34:08.465 clat (usec): min=20074, max=56157, avg=34089.17, stdev=1826.29 00:34:08.465 lat (usec): min=20110, max=56195, avg=34133.16, stdev=1824.77 00:34:08.465 clat percentiles (usec): 00:34:08.465 | 1.00th=[31851], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:34:08.465 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:08.465 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.465 | 99.00th=[39584], 99.50th=[44303], 99.90th=[55837], 99.95th=[55837], 00:34:08.465 | 99.99th=[56361] 00:34:08.465 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1852.79, stdev=65.51, samples=19 00:34:08.465 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:34:08.465 lat (msec) : 50=99.66%, 100=0.34% 00:34:08.465 cpu : usr=97.59%, sys=1.44%, ctx=200, majf=0, minf=40 00:34:08.465 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.465 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.465 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.465 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.465 filename0: (groupid=0, jobs=1): err= 0: pid=1245499: Sun Dec 8 06:37:57 2024 00:34:08.465 read: IOPS=465, BW=1860KiB/s (1905kB/s)(18.2MiB/10012msec) 00:34:08.465 slat (nsec): min=5542, max=84424, avg=39553.91, stdev=10733.65 00:34:08.465 clat (usec): min=17776, max=42592, avg=34045.60, 
stdev=1383.24 00:34:08.465 lat (usec): min=17801, max=42637, avg=34085.15, stdev=1383.31 00:34:08.465 clat percentiles (usec): 00:34:08.465 | 1.00th=[31851], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:34:08.465 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:08.465 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.465 | 99.00th=[37487], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:34:08.465 | 99.99th=[42730] 00:34:08.465 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1852.63, stdev=65.66, samples=19 00:34:08.465 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:34:08.465 lat (msec) : 20=0.34%, 50=99.66% 00:34:08.465 cpu : usr=98.04%, sys=1.37%, ctx=40, majf=0, minf=33 00:34:08.465 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.465 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.465 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.465 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.465 filename0: (groupid=0, jobs=1): err= 0: pid=1245500: Sun Dec 8 06:37:57 2024 00:34:08.465 read: IOPS=469, BW=1880KiB/s (1925kB/s)(18.4MiB/10011msec) 00:34:08.465 slat (usec): min=6, max=145, avg=38.06, stdev=26.72 00:34:08.465 clat (usec): min=2212, max=42820, avg=33740.52, stdev=3248.85 00:34:08.465 lat (usec): min=2259, max=42839, avg=33778.57, stdev=3246.56 00:34:08.465 clat percentiles (usec): 00:34:08.465 | 1.00th=[14484], 5.00th=[32637], 10.00th=[33424], 20.00th=[33817], 00:34:08.465 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:08.465 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.465 | 99.00th=[38011], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:34:08.465 | 99.99th=[42730] 00:34:08.465 bw ( KiB/s): min= 1792, max= 2304, per=4.21%, avg=1875.20, stdev=119.46, samples=20 00:34:08.465 iops : min= 448, max= 576, avg=468.80, stdev=29.87, samples=20 00:34:08.465 lat (msec) : 4=0.55%, 10=0.13%, 20=0.68%, 50=98.64% 00:34:08.465 cpu : usr=95.69%, sys=2.57%, ctx=285, majf=0, minf=27 00:34:08.465 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.465 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.465 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.465 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.465 filename0: (groupid=0, jobs=1): err= 0: pid=1245501: Sun Dec 8 06:37:57 2024 00:34:08.465 read: IOPS=466, BW=1866KiB/s (1910kB/s)(18.2MiB/10017msec) 00:34:08.465 slat (usec): min=9, max=109, avg=47.59, stdev=18.45 00:34:08.466 clat (usec): min=12616, max=42672, avg=33881.48, stdev=1815.08 00:34:08.466 lat (usec): min=12633, max=42697, avg=33929.07, stdev=1815.02 00:34:08.466 clat percentiles (usec): 00:34:08.466 | 1.00th=[25297], 5.00th=[32637], 10.00th=[33424], 20.00th=[33424], 00:34:08.466 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:08.466 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.466 | 99.00th=[38011], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:34:08.466 | 99.99th=[42730] 00:34:08.466 bw ( KiB/s): min= 1664, max= 2048, per=4.18%, avg=1862.40, stdev=87.85, samples=20 00:34:08.466 iops : 
min= 416, max= 512, avg=465.60, stdev=21.96, samples=20 00:34:08.466 lat (msec) : 20=0.34%, 50=99.66% 00:34:08.466 cpu : usr=97.35%, sys=1.75%, ctx=111, majf=0, minf=38 00:34:08.466 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.466 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.466 filename0: (groupid=0, jobs=1): err= 0: pid=1245502: Sun Dec 8 06:37:57 2024 00:34:08.466 read: IOPS=466, BW=1866KiB/s (1910kB/s)(18.2MiB/10017msec) 00:34:08.466 slat (usec): min=10, max=117, avg=58.03, stdev=21.06 00:34:08.466 clat (usec): min=13897, max=45273, avg=33787.81, stdev=1811.02 00:34:08.466 lat (usec): min=13920, max=45305, avg=33845.84, stdev=1813.07 00:34:08.466 clat percentiles (usec): 00:34:08.466 | 1.00th=[23725], 5.00th=[32637], 10.00th=[33162], 20.00th=[33424], 00:34:08.466 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:08.466 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[35390], 00:34:08.466 | 99.00th=[38011], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:08.466 | 99.99th=[45351] 00:34:08.466 bw ( KiB/s): min= 1664, max= 2048, per=4.18%, avg=1862.40, stdev=87.85, samples=20 00:34:08.466 iops : min= 416, max= 512, avg=465.60, stdev=21.96, samples=20 00:34:08.466 lat (msec) : 20=0.30%, 50=99.70% 00:34:08.466 cpu : usr=96.78%, sys=2.01%, ctx=404, majf=0, minf=29 00:34:08.466 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.466 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.466 filename1: (groupid=0, jobs=1): err= 0: pid=1245503: Sun Dec 8 06:37:57 2024 00:34:08.466 read: IOPS=463, BW=1856KiB/s (1900kB/s)(18.1MiB/10002msec) 00:34:08.466 slat (usec): min=7, max=106, avg=28.60, stdev=14.80 00:34:08.466 clat (usec): min=26345, max=49807, avg=34212.40, stdev=1433.75 00:34:08.466 lat (usec): min=26354, max=49833, avg=34241.00, stdev=1431.42 00:34:08.466 clat percentiles (usec): 00:34:08.466 | 1.00th=[31851], 5.00th=[32900], 10.00th=[33817], 20.00th=[33817], 00:34:08.466 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:34:08.466 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.466 | 99.00th=[38536], 99.50th=[45351], 99.90th=[49546], 99.95th=[49546], 00:34:08.466 | 99.99th=[50070] 00:34:08.466 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1852.79, stdev=65.51, samples=19 00:34:08.466 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:34:08.466 lat (msec) : 50=100.00% 00:34:08.466 cpu : usr=98.17%, sys=1.22%, ctx=74, majf=0, minf=46 00:34:08.466 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.466 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.466 filename1: (groupid=0, jobs=1): err= 0: 
pid=1245504: Sun Dec 8 06:37:57 2024 00:34:08.466 read: IOPS=469, BW=1877KiB/s (1922kB/s)(18.4MiB/10023msec) 00:34:08.466 slat (nsec): min=5400, max=73997, avg=21038.29, stdev=11200.51 00:34:08.466 clat (usec): min=5800, max=45727, avg=33917.96, stdev=2846.51 00:34:08.466 lat (usec): min=5805, max=45745, avg=33939.00, stdev=2846.12 00:34:08.466 clat percentiles (usec): 00:34:08.466 | 1.00th=[19006], 5.00th=[32637], 10.00th=[33817], 20.00th=[33817], 00:34:08.466 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:34:08.466 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.466 | 99.00th=[37487], 99.50th=[39060], 99.90th=[45876], 99.95th=[45876], 00:34:08.466 | 99.99th=[45876] 00:34:08.466 bw ( KiB/s): min= 1792, max= 2180, per=4.21%, avg=1875.40, stdev=96.05, samples=20 00:34:08.466 iops : min= 448, max= 545, avg=468.85, stdev=24.01, samples=20 00:34:08.466 lat (msec) : 10=0.34%, 20=1.36%, 50=98.30% 00:34:08.466 cpu : usr=97.26%, sys=1.78%, ctx=117, majf=0, minf=33 00:34:08.466 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:08.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.466 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.466 filename1: (groupid=0, jobs=1): err= 0: pid=1245505: Sun Dec 8 06:37:57 2024 00:34:08.466 read: IOPS=466, BW=1866KiB/s (1910kB/s)(18.2MiB/10017msec) 00:34:08.466 slat (nsec): min=8395, max=98832, avg=39068.54, stdev=13377.01 00:34:08.466 clat (usec): min=14038, max=42604, avg=33963.02, stdev=1800.89 00:34:08.466 lat (usec): min=14078, max=42645, avg=34002.09, stdev=1802.04 00:34:08.466 clat percentiles (usec): 00:34:08.466 | 1.00th=[23200], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:34:08.466 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:08.466 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.466 | 99.00th=[38011], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:34:08.466 | 99.99th=[42730] 00:34:08.466 bw ( KiB/s): min= 1664, max= 2048, per=4.18%, avg=1862.40, stdev=87.85, samples=20 00:34:08.466 iops : min= 416, max= 512, avg=465.60, stdev=21.96, samples=20 00:34:08.466 lat (msec) : 20=0.34%, 50=99.66% 00:34:08.466 cpu : usr=98.28%, sys=1.31%, ctx=15, majf=0, minf=21 00:34:08.466 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.466 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.466 filename1: (groupid=0, jobs=1): err= 0: pid=1245506: Sun Dec 8 06:37:57 2024 00:34:08.466 read: IOPS=463, BW=1855KiB/s (1900kB/s)(18.1MiB/10004msec) 00:34:08.466 slat (usec): min=10, max=113, avg=44.40, stdev=19.94 00:34:08.466 clat (usec): min=19037, max=59229, avg=34106.43, stdev=1991.69 00:34:08.466 lat (usec): min=19097, max=59258, avg=34150.83, stdev=1988.06 00:34:08.466 clat percentiles (usec): 00:34:08.466 | 1.00th=[32113], 5.00th=[32900], 10.00th=[33162], 20.00th=[33817], 00:34:08.466 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:08.466 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 
95.00th=[35390], 00:34:08.466 | 99.00th=[39584], 99.50th=[44303], 99.90th=[58983], 99.95th=[58983], 00:34:08.466 | 99.99th=[58983] 00:34:08.466 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1845.89, stdev=64.93, samples=19 00:34:08.466 iops : min= 448, max= 480, avg=461.47, stdev=16.23, samples=19 00:34:08.466 lat (msec) : 20=0.32%, 50=99.33%, 100=0.34% 00:34:08.466 cpu : usr=98.14%, sys=1.26%, ctx=40, majf=0, minf=30 00:34:08.466 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.466 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.466 filename1: (groupid=0, jobs=1): err= 0: pid=1245507: Sun Dec 8 06:37:57 2024 00:34:08.466 read: IOPS=464, BW=1860KiB/s (1905kB/s)(18.2MiB/10011msec) 00:34:08.466 slat (nsec): min=8089, max=93883, avg=33023.09, stdev=16075.37 00:34:08.466 clat (usec): min=10054, max=48384, avg=34106.99, stdev=1937.92 00:34:08.466 lat (usec): min=10062, max=48426, avg=34140.01, stdev=1937.20 00:34:08.466 clat percentiles (usec): 00:34:08.466 | 1.00th=[31589], 5.00th=[32900], 10.00th=[33817], 20.00th=[33817], 00:34:08.466 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:08.466 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.466 | 99.00th=[38536], 99.50th=[45351], 99.90th=[48497], 99.95th=[48497], 00:34:08.466 | 99.99th=[48497] 00:34:08.466 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1852.63, stdev=65.66, samples=19 00:34:08.466 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:34:08.466 lat (msec) : 20=0.32%, 50=99.68% 00:34:08.466 cpu : usr=98.33%, sys=1.17%, ctx=22, majf=0, minf=27 00:34:08.466 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.466 issued rwts: total=4655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.466 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.466 filename1: (groupid=0, jobs=1): err= 0: pid=1245508: Sun Dec 8 06:37:57 2024 00:34:08.466 read: IOPS=463, BW=1855KiB/s (1900kB/s)(18.1MiB/10003msec) 00:34:08.466 slat (usec): min=8, max=103, avg=38.98, stdev=12.09 00:34:08.466 clat (usec): min=17837, max=60782, avg=34126.68, stdev=2067.31 00:34:08.466 lat (usec): min=17850, max=60822, avg=34165.66, stdev=2067.70 00:34:08.466 clat percentiles (usec): 00:34:08.466 | 1.00th=[32113], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:34:08.466 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:08.466 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.466 | 99.00th=[41681], 99.50th=[42206], 99.90th=[60556], 99.95th=[60556], 00:34:08.466 | 99.99th=[60556] 00:34:08.466 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1846.05, stdev=64.79, samples=19 00:34:08.467 iops : min= 448, max= 480, avg=461.47, stdev=16.23, samples=19 00:34:08.467 lat (msec) : 20=0.34%, 50=99.31%, 100=0.34% 00:34:08.467 cpu : usr=97.91%, sys=1.44%, ctx=81, majf=0, minf=30 00:34:08.467 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
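
Each run above tears down what it created in reverse dependency order, deleting the NVMf subsystem before the null bdev backing it (see the destroy_subsystems traces after the earlier two runs). For this three-subsystem run the equivalent plain RPC loop would be, as a sketch:

    for sub in 0 1 2; do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
        scripts/rpc.py bdev_null_delete "bdev_null$sub"
    done
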
00:34:08.467 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.467 filename1: (groupid=0, jobs=1): err= 0: pid=1245509: Sun Dec 8 06:37:57 2024 00:34:08.467 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10016msec) 00:34:08.467 slat (usec): min=8, max=149, avg=42.89, stdev=17.21 00:34:08.467 clat (usec): min=13737, max=42663, avg=33948.92, stdev=1824.20 00:34:08.467 lat (usec): min=13820, max=42690, avg=33991.81, stdev=1822.55 00:34:08.467 clat percentiles (usec): 00:34:08.467 | 1.00th=[23462], 5.00th=[32637], 10.00th=[33424], 20.00th=[33817], 00:34:08.467 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:08.467 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.467 | 99.00th=[38011], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:08.467 | 99.99th=[42730] 00:34:08.467 bw ( KiB/s): min= 1664, max= 2048, per=4.18%, avg=1862.40, stdev=87.85, samples=20 00:34:08.467 iops : min= 416, max= 512, avg=465.60, stdev=21.96, samples=20 00:34:08.467 lat (msec) : 20=0.34%, 50=99.66% 00:34:08.467 cpu : usr=97.95%, sys=1.39%, ctx=32, majf=0, minf=32 00:34:08.467 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.467 filename1: (groupid=0, jobs=1): err= 0: pid=1245510: Sun Dec 8 06:37:57 2024 00:34:08.467 read: IOPS=463, BW=1855KiB/s (1900kB/s)(18.1MiB/10004msec) 00:34:08.467 slat (usec): min=8, max=122, avg=31.70, stdev=22.26 00:34:08.467 clat (usec): min=27854, max=52226, avg=34236.16, stdev=1338.02 00:34:08.467 lat (usec): min=27864, max=52261, avg=34267.86, stdev=1335.32 00:34:08.467 clat percentiles (usec): 00:34:08.467 | 1.00th=[31851], 5.00th=[32900], 10.00th=[33817], 20.00th=[33817], 00:34:08.467 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:34:08.467 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.467 | 99.00th=[39584], 99.50th=[44303], 99.90th=[47973], 99.95th=[48497], 00:34:08.467 | 99.99th=[52167] 00:34:08.467 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1852.63, stdev=65.66, samples=19 00:34:08.467 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:34:08.467 lat (msec) : 50=99.96%, 100=0.04% 00:34:08.467 cpu : usr=96.47%, sys=1.97%, ctx=287, majf=0, minf=51 00:34:08.467 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.467 filename2: (groupid=0, jobs=1): err= 0: pid=1245511: Sun Dec 8 06:37:57 2024 00:34:08.467 read: IOPS=463, BW=1856KiB/s (1900kB/s)(18.1MiB/10002msec) 00:34:08.467 slat (usec): min=8, max=120, avg=56.72, stdev=23.59 00:34:08.467 clat (usec): min=20033, max=57752, avg=33978.34, stdev=1891.71 00:34:08.467 lat (usec): min=20044, max=57781, avg=34035.06, stdev=1891.19 00:34:08.467 
clat percentiles (usec): 00:34:08.467 | 1.00th=[31851], 5.00th=[32637], 10.00th=[33162], 20.00th=[33424], 00:34:08.467 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:08.467 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[35390], 00:34:08.467 | 99.00th=[39060], 99.50th=[43779], 99.90th=[57410], 99.95th=[57934], 00:34:08.467 | 99.99th=[57934] 00:34:08.467 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1852.63, stdev=65.66, samples=19 00:34:08.467 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:34:08.467 lat (msec) : 50=99.66%, 100=0.34% 00:34:08.467 cpu : usr=95.42%, sys=2.59%, ctx=696, majf=0, minf=28 00:34:08.467 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.467 filename2: (groupid=0, jobs=1): err= 0: pid=1245512: Sun Dec 8 06:37:57 2024 00:34:08.467 read: IOPS=465, BW=1863KiB/s (1908kB/s)(18.2MiB/10025msec) 00:34:08.467 slat (usec): min=8, max=120, avg=55.82, stdev=24.47 00:34:08.467 clat (usec): min=16416, max=45734, avg=33852.35, stdev=1575.37 00:34:08.467 lat (usec): min=16463, max=45751, avg=33908.17, stdev=1571.01 00:34:08.467 clat percentiles (usec): 00:34:08.467 | 1.00th=[28705], 5.00th=[32637], 10.00th=[32900], 20.00th=[33424], 00:34:08.467 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:08.467 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.467 | 99.00th=[37487], 99.50th=[39060], 99.90th=[45876], 99.95th=[45876], 00:34:08.467 | 99.99th=[45876] 00:34:08.467 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1861.60, stdev=64.69, samples=20 00:34:08.467 iops : min= 448, max= 480, avg=465.40, stdev=16.17, samples=20 00:34:08.467 lat (msec) : 20=0.30%, 50=99.70% 00:34:08.467 cpu : usr=97.35%, sys=1.68%, ctx=74, majf=0, minf=47 00:34:08.467 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 issued rwts: total=4670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.467 filename2: (groupid=0, jobs=1): err= 0: pid=1245513: Sun Dec 8 06:37:57 2024 00:34:08.467 read: IOPS=464, BW=1860KiB/s (1904kB/s)(18.2MiB/10015msec) 00:34:08.467 slat (usec): min=4, max=105, avg=41.94, stdev=14.69 00:34:08.467 clat (usec): min=17886, max=52720, avg=34038.12, stdev=1542.38 00:34:08.467 lat (usec): min=17922, max=52735, avg=34080.07, stdev=1540.99 00:34:08.467 clat percentiles (usec): 00:34:08.467 | 1.00th=[31589], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:34:08.467 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:08.467 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.467 | 99.00th=[40633], 99.50th=[42206], 99.90th=[42730], 99.95th=[45351], 00:34:08.467 | 99.99th=[52691] 00:34:08.467 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1852.63, stdev=65.66, samples=19 00:34:08.467 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:34:08.467 lat (msec) : 20=0.34%, 50=99.61%, 100=0.04% 
00:34:08.467 cpu : usr=97.63%, sys=1.57%, ctx=89, majf=0, minf=31 00:34:08.467 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.467 filename2: (groupid=0, jobs=1): err= 0: pid=1245514: Sun Dec 8 06:37:57 2024 00:34:08.467 read: IOPS=466, BW=1866KiB/s (1910kB/s)(18.2MiB/10017msec) 00:34:08.467 slat (usec): min=7, max=113, avg=53.70, stdev=20.02 00:34:08.467 clat (usec): min=14355, max=42581, avg=33828.71, stdev=1821.54 00:34:08.467 lat (usec): min=14408, max=42615, avg=33882.41, stdev=1820.07 00:34:08.467 clat percentiles (usec): 00:34:08.467 | 1.00th=[23200], 5.00th=[32637], 10.00th=[33162], 20.00th=[33424], 00:34:08.467 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:08.467 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.467 | 99.00th=[37487], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:34:08.467 | 99.99th=[42730] 00:34:08.467 bw ( KiB/s): min= 1664, max= 2048, per=4.18%, avg=1862.40, stdev=87.85, samples=20 00:34:08.467 iops : min= 416, max= 512, avg=465.60, stdev=21.96, samples=20 00:34:08.467 lat (msec) : 20=0.34%, 50=99.66% 00:34:08.467 cpu : usr=97.92%, sys=1.49%, ctx=51, majf=0, minf=23 00:34:08.467 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.467 filename2: (groupid=0, jobs=1): err= 0: pid=1245515: Sun Dec 8 06:37:57 2024 00:34:08.467 read: IOPS=463, BW=1855KiB/s (1900kB/s)(18.1MiB/10003msec) 00:34:08.467 slat (usec): min=8, max=104, avg=40.89, stdev=13.67 00:34:08.467 clat (usec): min=17872, max=60876, avg=34117.69, stdev=2081.33 00:34:08.467 lat (usec): min=17909, max=60916, avg=34158.58, stdev=2080.25 00:34:08.467 clat percentiles (usec): 00:34:08.467 | 1.00th=[31589], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:34:08.467 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:08.467 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.467 | 99.00th=[41681], 99.50th=[42206], 99.90th=[60556], 99.95th=[60556], 00:34:08.467 | 99.99th=[61080] 00:34:08.467 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1846.05, stdev=64.79, samples=19 00:34:08.467 iops : min= 448, max= 480, avg=461.47, stdev=16.23, samples=19 00:34:08.467 lat (msec) : 20=0.34%, 50=99.31%, 100=0.34% 00:34:08.467 cpu : usr=98.14%, sys=1.36%, ctx=57, majf=0, minf=26 00:34:08.467 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.467 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.467 filename2: (groupid=0, jobs=1): err= 0: pid=1245516: Sun Dec 8 06:37:57 2024 00:34:08.467 read: IOPS=463, BW=1856KiB/s 
(1900kB/s)(18.1MiB/10001msec) 00:34:08.467 slat (usec): min=8, max=124, avg=35.97, stdev=11.49 00:34:08.468 clat (usec): min=20057, max=55988, avg=34156.12, stdev=1803.81 00:34:08.468 lat (usec): min=20081, max=56023, avg=34192.09, stdev=1804.01 00:34:08.468 clat percentiles (usec): 00:34:08.468 | 1.00th=[32113], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:34:08.468 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:08.468 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.468 | 99.00th=[39584], 99.50th=[44303], 99.90th=[55837], 99.95th=[55837], 00:34:08.468 | 99.99th=[55837] 00:34:08.468 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1852.79, stdev=65.51, samples=19 00:34:08.468 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:34:08.468 lat (msec) : 50=99.66%, 100=0.34% 00:34:08.468 cpu : usr=98.52%, sys=1.01%, ctx=34, majf=0, minf=25 00:34:08.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:08.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.468 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.468 filename2: (groupid=0, jobs=1): err= 0: pid=1245517: Sun Dec 8 06:37:57 2024 00:34:08.468 read: IOPS=465, BW=1861KiB/s (1905kB/s)(18.2MiB/10009msec) 00:34:08.468 slat (nsec): min=6019, max=90840, avg=30167.29, stdev=15979.95 00:34:08.468 clat (usec): min=12627, max=48358, avg=34154.11, stdev=1863.52 00:34:08.468 lat (usec): min=12643, max=48374, avg=34184.28, stdev=1860.47 00:34:08.468 clat percentiles (usec): 00:34:08.468 | 1.00th=[31851], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:34:08.468 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:34:08.468 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.468 | 99.00th=[41681], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:34:08.468 | 99.99th=[48497] 00:34:08.468 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1856.00, stdev=65.66, samples=20 00:34:08.468 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:34:08.468 lat (msec) : 20=0.34%, 50=99.66% 00:34:08.468 cpu : usr=96.78%, sys=2.03%, ctx=182, majf=0, minf=38 00:34:08.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.468 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.468 filename2: (groupid=0, jobs=1): err= 0: pid=1245518: Sun Dec 8 06:37:57 2024 00:34:08.468 read: IOPS=463, BW=1856KiB/s (1900kB/s)(18.1MiB/10001msec) 00:34:08.468 slat (nsec): min=7253, max=87119, avg=30827.25, stdev=12576.54 00:34:08.468 clat (usec): min=28222, max=45284, avg=34239.83, stdev=1219.56 00:34:08.468 lat (usec): min=28234, max=45305, avg=34270.66, stdev=1217.79 00:34:08.468 clat percentiles (usec): 00:34:08.468 | 1.00th=[32113], 5.00th=[33162], 10.00th=[33817], 20.00th=[33817], 00:34:08.468 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:08.468 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:34:08.468 | 99.00th=[40109], 99.50th=[44303], 99.90th=[45351], 
99.95th=[45351], 00:34:08.468 | 99.99th=[45351] 00:34:08.468 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1852.63, stdev=65.66, samples=19 00:34:08.468 iops : min= 448, max= 480, avg=463.16, stdev=16.42, samples=19 00:34:08.468 lat (msec) : 50=100.00% 00:34:08.468 cpu : usr=97.01%, sys=1.84%, ctx=233, majf=0, minf=40 00:34:08.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:08.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.468 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:08.468 00:34:08.468 Run status group 0 (all jobs): 00:34:08.468 READ: bw=43.5MiB/s (45.7MB/s), 1855KiB/s-1880KiB/s (1900kB/s-1925kB/s), io=436MiB (458MB), run=10000-10025msec 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:08.468 
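The per-job results above all land within a few percent of each other (roughly 1855-1877 KiB/s of 4 KiB random reads at queue depth 16), which is the expected outcome for identical jobs against DIF-enabled null bdevs. The teardown now in progress and the re-creation that follows are driven by rpc_cmd, which is assumed here to wrap SPDK's scripts/rpc.py against the running nvmf_tgt; a standalone sketch of the same cycle, using only arguments visible in the trace, would be:

    # tear down subsystems 0..2: delete each subsystem before freeing its null bdev
    for sub in 0 1 2; do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub}"
        scripts/rpc.py bdev_null_delete "bdev_null${sub}"
    done
    # re-create 0 and 1 with DIF type 1 null bdevs (64 MiB, 512 B blocks, 16 B metadata)
    for sub in 0 1; do
        scripts/rpc.py bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
            --serial-number "53313233-${sub}" --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
            -t tcp -a 10.0.0.2 -s 4420
    done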
06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.468 bdev_null0 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.468 [2024-12-08 06:37:57.564550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.468 bdev_null1 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.468 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:08.469 { 00:34:08.469 "params": { 00:34:08.469 "name": "Nvme$subsystem", 00:34:08.469 "trtype": "$TEST_TRANSPORT", 00:34:08.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.469 "adrfam": "ipv4", 00:34:08.469 "trsvcid": "$NVMF_PORT", 00:34:08.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.469 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:34:08.469 "hdgst": ${hdgst:-false}, 00:34:08.469 "ddgst": ${ddgst:-false} 00:34:08.469 }, 00:34:08.469 "method": "bdev_nvme_attach_controller" 00:34:08.469 } 00:34:08.469 EOF 00:34:08.469 )") 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:08.469 { 00:34:08.469 "params": { 00:34:08.469 "name": "Nvme$subsystem", 00:34:08.469 "trtype": "$TEST_TRANSPORT", 00:34:08.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.469 "adrfam": "ipv4", 00:34:08.469 "trsvcid": "$NVMF_PORT", 00:34:08.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.469 "hdgst": ${hdgst:-false}, 00:34:08.469 "ddgst": ${ddgst:-false} 00:34:08.469 }, 00:34:08.469 "method": "bdev_nvme_attach_controller" 00:34:08.469 } 00:34:08.469 EOF 00:34:08.469 )") 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # 
jq . 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:08.469 "params": { 00:34:08.469 "name": "Nvme0", 00:34:08.469 "trtype": "tcp", 00:34:08.469 "traddr": "10.0.0.2", 00:34:08.469 "adrfam": "ipv4", 00:34:08.469 "trsvcid": "4420", 00:34:08.469 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:08.469 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:08.469 "hdgst": false, 00:34:08.469 "ddgst": false 00:34:08.469 }, 00:34:08.469 "method": "bdev_nvme_attach_controller" 00:34:08.469 },{ 00:34:08.469 "params": { 00:34:08.469 "name": "Nvme1", 00:34:08.469 "trtype": "tcp", 00:34:08.469 "traddr": "10.0.0.2", 00:34:08.469 "adrfam": "ipv4", 00:34:08.469 "trsvcid": "4420", 00:34:08.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:08.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:08.469 "hdgst": false, 00:34:08.469 "ddgst": false 00:34:08.469 }, 00:34:08.469 "method": "bdev_nvme_attach_controller" 00:34:08.469 }' 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:08.469 06:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.469 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:08.469 ... 00:34:08.469 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:08.469 ... 
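The jq/printf pipeline above assembles the bdev_nvme_attach_controller config that fio reads on fd 62, while gen_fio_conf writes the job description on fd 61. A minimal standalone version of the invocation is sketched below; the job-file body is reconstructed from the banner just printed (two job sections times numjobs=2 accounts for the "Starting 4 threads" that follows), and the bdev names Nvme0n1/Nvme1n1 are an assumption, derived from the controller names Nvme0/Nvme1 plus namespace 1:

    cat > /tmp/dif.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1          # the SPDK fio plugin only supports the thread model
    rw=randread
    bs=8k,16k,128k    # per-direction sizes: (R) 8k, (W) 16k, (T) 128k, as in the banner
    iodepth=8
    numjobs=2
    runtime=5

    [filename0]
    filename=Nvme0n1  # assumed bdev name for controller Nvme0, nsid 1

    [filename1]
    filename=Nvme1n1
    EOF
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --spdk_json_conf=/tmp/bdev.json /tmp/dif.fio

Here /tmp/bdev.json stands in for the attach-controller config the harness passes on /dev/fd/62.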
00:34:08.469 fio-3.35 00:34:08.469 Starting 4 threads 00:34:13.731 00:34:13.731 filename0: (groupid=0, jobs=1): err= 0: pid=1246787: Sun Dec 8 06:38:03 2024 00:34:13.731 read: IOPS=1881, BW=14.7MiB/s (15.4MB/s)(73.5MiB/5001msec) 00:34:13.731 slat (nsec): min=7089, max=72026, avg=21027.51, stdev=9743.89 00:34:13.731 clat (usec): min=796, max=7694, avg=4177.16, stdev=613.40 00:34:13.731 lat (usec): min=814, max=7706, avg=4198.19, stdev=613.12 00:34:13.731 clat percentiles (usec): 00:34:13.731 | 1.00th=[ 2606], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3884], 00:34:13.731 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:34:13.731 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 5276], 00:34:13.731 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7373], 99.95th=[ 7504], 00:34:13.731 | 99.99th=[ 7701] 00:34:13.731 bw ( KiB/s): min=14720, max=15648, per=24.53%, avg=15068.44, stdev=308.47, samples=9 00:34:13.731 iops : min= 1840, max= 1956, avg=1883.56, stdev=38.56, samples=9 00:34:13.731 lat (usec) : 1000=0.03% 00:34:13.731 lat (msec) : 2=0.52%, 4=32.97%, 10=66.48% 00:34:13.731 cpu : usr=94.66%, sys=4.76%, ctx=26, majf=0, minf=9 00:34:13.731 IO depths : 1=0.3%, 2=17.3%, 4=55.6%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.731 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.731 issued rwts: total=9408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.731 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:13.731 filename0: (groupid=0, jobs=1): err= 0: pid=1246788: Sun Dec 8 06:38:03 2024 00:34:13.731 read: IOPS=1934, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5002msec) 00:34:13.731 slat (nsec): min=7843, max=82623, avg=18122.02, stdev=10361.95 00:34:13.731 clat (usec): min=701, max=7709, avg=4071.09, stdev=549.42 00:34:13.731 lat (usec): min=720, max=7726, avg=4089.21, stdev=549.97 00:34:13.731 clat percentiles (usec): 00:34:13.731 | 1.00th=[ 2638], 5.00th=[ 3294], 10.00th=[ 3523], 20.00th=[ 3785], 00:34:13.731 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4146], 00:34:13.731 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4883], 00:34:13.731 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 7308], 99.95th=[ 7570], 00:34:13.731 | 99.99th=[ 7701] 00:34:13.731 bw ( KiB/s): min=14928, max=16768, per=25.19%, avg=15471.80, stdev=538.40, samples=10 00:34:13.731 iops : min= 1866, max= 2096, avg=1933.90, stdev=67.35, samples=10 00:34:13.731 lat (usec) : 750=0.01%, 1000=0.02% 00:34:13.731 lat (msec) : 2=0.41%, 4=41.21%, 10=58.35% 00:34:13.731 cpu : usr=95.26%, sys=3.96%, ctx=90, majf=0, minf=9 00:34:13.731 IO depths : 1=0.3%, 2=17.0%, 4=55.7%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.731 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.731 issued rwts: total=9676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.731 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:13.731 filename1: (groupid=0, jobs=1): err= 0: pid=1246789: Sun Dec 8 06:38:03 2024 00:34:13.731 read: IOPS=1954, BW=15.3MiB/s (16.0MB/s)(76.4MiB/5003msec) 00:34:13.731 slat (nsec): min=7740, max=70200, avg=15204.85, stdev=8649.98 00:34:13.731 clat (usec): min=947, max=7744, avg=4042.55, stdev=530.21 00:34:13.731 lat (usec): min=961, max=7761, avg=4057.76, stdev=530.85 00:34:13.731 clat percentiles (usec): 00:34:13.731 | 1.00th=[ 2409], 
5.00th=[ 3261], 10.00th=[ 3490], 20.00th=[ 3720], 00:34:13.731 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4178], 00:34:13.731 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4752], 00:34:13.731 | 99.00th=[ 5669], 99.50th=[ 6128], 99.90th=[ 7177], 99.95th=[ 7373], 00:34:13.731 | 99.99th=[ 7767] 00:34:13.731 bw ( KiB/s): min=14848, max=17104, per=25.46%, avg=15640.00, stdev=637.21, samples=10 00:34:13.731 iops : min= 1856, max= 2138, avg=1955.00, stdev=79.65, samples=10 00:34:13.731 lat (usec) : 1000=0.01% 00:34:13.731 lat (msec) : 2=0.66%, 4=40.21%, 10=59.11% 00:34:13.731 cpu : usr=94.58%, sys=4.42%, ctx=175, majf=0, minf=0 00:34:13.731 IO depths : 1=0.4%, 2=11.7%, 4=59.9%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.731 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.731 issued rwts: total=9780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.731 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:13.731 filename1: (groupid=0, jobs=1): err= 0: pid=1246790: Sun Dec 8 06:38:03 2024 00:34:13.731 read: IOPS=1909, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5002msec) 00:34:13.731 slat (nsec): min=7568, max=82591, avg=20712.22, stdev=10589.03 00:34:13.731 clat (usec): min=729, max=7913, avg=4111.90, stdev=612.86 00:34:13.731 lat (usec): min=745, max=7929, avg=4132.61, stdev=613.13 00:34:13.731 clat percentiles (usec): 00:34:13.731 | 1.00th=[ 2311], 5.00th=[ 3326], 10.00th=[ 3556], 20.00th=[ 3818], 00:34:13.731 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4178], 00:34:13.731 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5080], 00:34:13.731 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7504], 99.95th=[ 7767], 00:34:13.731 | 99.99th=[ 7898] 00:34:13.731 bw ( KiB/s): min=14528, max=16144, per=24.90%, avg=15294.00, stdev=514.66, samples=9 00:34:13.731 iops : min= 1816, max= 2018, avg=1911.67, stdev=64.32, samples=9 00:34:13.731 lat (usec) : 750=0.02%, 1000=0.09% 00:34:13.731 lat (msec) : 2=0.57%, 4=38.20%, 10=61.12% 00:34:13.731 cpu : usr=94.92%, sys=4.02%, ctx=172, majf=0, minf=9 00:34:13.731 IO depths : 1=0.2%, 2=18.7%, 4=54.4%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.731 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.731 issued rwts: total=9550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.732 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:13.732 00:34:13.732 Run status group 0 (all jobs): 00:34:13.732 READ: bw=60.0MiB/s (62.9MB/s), 14.7MiB/s-15.3MiB/s (15.4MB/s-16.0MB/s), io=300MiB (315MB), run=5001-5003msec 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- 
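As a quick consistency check on the summary line above, the four per-job averages (15068.44, 15471.80, 15640.00 and 15294.00 KiB/s) should add up to the group READ figure, and they do: about 60.0 MiB/s. A small awk pass over a capture of this fio output, keyed on the "bw ( KiB/s):" lines, performs the same sum (fio.log is an assumed capture of just this four-job run):

    awk '/bw \( KiB\/s\):/ {
        for (i = 1; i <= NF; i++)
            if ($i ~ /^avg=/) { gsub(/avg=|,/, "", $i); total += $i }   # strip "avg=" and trailing comma
    } END { printf "aggregate avg: %.1f MiB/s\n", total / 1024 }' fio.log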
common/autotest_common.sh@10 -- # set +x 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.991 00:34:13.991 real 0m24.580s 00:34:13.991 user 4m32.579s 00:34:13.991 sys 0m6.519s 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:13.991 06:38:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:13.991 ************************************ 00:34:13.991 END TEST fio_dif_rand_params 00:34:13.991 ************************************ 00:34:13.991 06:38:04 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:13.991 06:38:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:13.991 06:38:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:13.991 06:38:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:13.991 ************************************ 00:34:13.991 START TEST fio_dif_digest 00:34:13.991 ************************************ 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.991 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:14.251 bdev_null0 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:14.252 [2024-12-08 06:38:04.133470] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:14.252 { 00:34:14.252 "params": { 00:34:14.252 "name": "Nvme$subsystem", 00:34:14.252 "trtype": "$TEST_TRANSPORT", 00:34:14.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:14.252 "adrfam": "ipv4", 00:34:14.252 "trsvcid": "$NVMF_PORT", 00:34:14.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:14.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:14.252 "hdgst": ${hdgst:-false}, 00:34:14.252 "ddgst": ${ddgst:-false} 00:34:14.252 }, 00:34:14.252 "method": "bdev_nvme_attach_controller" 
00:34:14.252 } 00:34:14.252 EOF 00:34:14.252 )") 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
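The printf that follows emits only the per-controller object; fio's --spdk_json_conf expects it wrapped in a full SPDK JSON config, which the jq step above is assumed to produce. For this digest run the only change from the earlier jobs is in the transport options: "hdgst": true and "ddgst": true make the NVMe/TCP initiator negotiate header and data digests (CRC32C) on the connection. A hand-written equivalent, with the outer subsystems/bdev wrapper being an assumption (only the method/params block appears verbatim in the trace):

    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true,
                "ddgst": true
              }
            }
          ]
        }
      ]
    }
    EOF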
00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:14.252 "params": { 00:34:14.252 "name": "Nvme0", 00:34:14.252 "trtype": "tcp", 00:34:14.252 "traddr": "10.0.0.2", 00:34:14.252 "adrfam": "ipv4", 00:34:14.252 "trsvcid": "4420", 00:34:14.252 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:14.252 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:14.252 "hdgst": true, 00:34:14.252 "ddgst": true 00:34:14.252 }, 00:34:14.252 "method": "bdev_nvme_attach_controller" 00:34:14.252 }' 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:14.252 06:38:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:14.511 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:14.511 ... 
00:34:14.511 fio-3.35 00:34:14.511 Starting 3 threads 00:34:26.798 00:34:26.798 filename0: (groupid=0, jobs=1): err= 0: pid=1247656: Sun Dec 8 06:38:15 2024 00:34:26.798 read: IOPS=228, BW=28.5MiB/s (29.9MB/s)(287MiB/10046msec) 00:34:26.798 slat (nsec): min=5634, max=59923, avg=18801.88, stdev=5631.07 00:34:26.798 clat (usec): min=9255, max=55203, avg=13102.02, stdev=1522.51 00:34:26.798 lat (usec): min=9270, max=55258, avg=13120.83, stdev=1523.18 00:34:26.798 clat percentiles (usec): 00:34:26.798 | 1.00th=[10945], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 00:34:26.798 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:34:26.798 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14353], 95.00th=[14746], 00:34:26.798 | 99.00th=[15664], 99.50th=[16057], 99.90th=[17433], 99.95th=[50070], 00:34:26.798 | 99.99th=[55313] 00:34:26.798 bw ( KiB/s): min=28160, max=30464, per=35.34%, avg=29324.80, stdev=692.30, samples=20 00:34:26.798 iops : min= 220, max= 238, avg=229.10, stdev= 5.41, samples=20 00:34:26.798 lat (msec) : 10=0.13%, 20=99.78%, 100=0.09% 00:34:26.798 cpu : usr=92.16%, sys=6.68%, ctx=136, majf=0, minf=170 00:34:26.798 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.798 issued rwts: total=2293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.798 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:26.798 filename0: (groupid=0, jobs=1): err= 0: pid=1247657: Sun Dec 8 06:38:15 2024 00:34:26.798 read: IOPS=210, BW=26.4MiB/s (27.6MB/s)(265MiB/10045msec) 00:34:26.798 slat (nsec): min=5371, max=41070, avg=17051.85, stdev=3817.34 00:34:26.798 clat (usec): min=10131, max=50721, avg=14188.40, stdev=1501.61 00:34:26.798 lat (usec): min=10146, max=50735, avg=14205.45, stdev=1501.58 00:34:26.798 clat percentiles (usec): 00:34:26.798 | 1.00th=[11731], 5.00th=[12518], 10.00th=[12911], 20.00th=[13304], 00:34:26.798 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:34:26.798 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15533], 95.00th=[16057], 00:34:26.798 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19530], 99.95th=[45876], 00:34:26.798 | 99.99th=[50594] 00:34:26.798 bw ( KiB/s): min=26112, max=27904, per=32.64%, avg=27084.80, stdev=474.23, samples=20 00:34:26.798 iops : min= 204, max= 218, avg=211.60, stdev= 3.70, samples=20 00:34:26.798 lat (msec) : 20=99.91%, 50=0.05%, 100=0.05% 00:34:26.798 cpu : usr=95.27%, sys=4.21%, ctx=17, majf=0, minf=107 00:34:26.798 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.798 issued rwts: total=2118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.798 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:26.798 filename0: (groupid=0, jobs=1): err= 0: pid=1247658: Sun Dec 8 06:38:15 2024 00:34:26.798 read: IOPS=209, BW=26.2MiB/s (27.4MB/s)(263MiB/10045msec) 00:34:26.798 slat (nsec): min=5472, max=50443, avg=17064.19, stdev=3854.09 00:34:26.798 clat (usec): min=10930, max=53400, avg=14294.85, stdev=1529.03 00:34:26.798 lat (usec): min=10945, max=53414, avg=14311.91, stdev=1529.20 00:34:26.798 clat percentiles (usec): 00:34:26.798 | 1.00th=[12125], 5.00th=[12780], 10.00th=[13042], 20.00th=[13435], 
00:34:26.798 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:34:26.798 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[15926], 00:34:26.798 | 99.00th=[16712], 99.50th=[16909], 99.90th=[20055], 99.95th=[50070], 00:34:26.798 | 99.99th=[53216] 00:34:26.798 bw ( KiB/s): min=25856, max=27904, per=32.39%, avg=26880.00, stdev=454.92, samples=20 00:34:26.798 iops : min= 202, max= 218, avg=210.00, stdev= 3.55, samples=20 00:34:26.798 lat (msec) : 20=99.86%, 50=0.05%, 100=0.10% 00:34:26.798 cpu : usr=94.06%, sys=4.95%, ctx=346, majf=0, minf=147 00:34:26.798 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.798 issued rwts: total=2102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.798 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:26.798 00:34:26.798 Run status group 0 (all jobs): 00:34:26.798 READ: bw=81.0MiB/s (85.0MB/s), 26.2MiB/s-28.5MiB/s (27.4MB/s-29.9MB/s), io=814MiB (854MB), run=10045-10046msec 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.798 00:34:26.798 real 0m11.200s 00:34:26.798 user 0m29.263s 00:34:26.798 sys 0m1.915s 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.798 06:38:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:26.798 ************************************ 00:34:26.798 END TEST fio_dif_digest 00:34:26.798 ************************************ 00:34:26.798 06:38:15 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:26.798 06:38:15 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:26.798 06:38:15 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:26.798 06:38:15 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:26.798 06:38:15 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:26.798 06:38:15 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:26.798 06:38:15 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:26.798 06:38:15 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:26.798 rmmod nvme_tcp 00:34:26.798 rmmod nvme_fabrics 00:34:26.798 rmmod nvme_keyring 00:34:26.798 06:38:15 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:26.798 06:38:15 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:26.798 06:38:15 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:26.798 06:38:15 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1241486 ']' 00:34:26.798 06:38:15 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1241486 00:34:26.798 06:38:15 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1241486 ']' 00:34:26.798 06:38:15 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1241486 00:34:26.798 06:38:15 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:26.798 06:38:15 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:26.798 06:38:15 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1241486 00:34:26.798 06:38:15 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:26.798 06:38:15 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:26.798 06:38:15 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1241486' 00:34:26.798 killing process with pid 1241486 00:34:26.798 06:38:15 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1241486 00:34:26.798 06:38:15 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1241486 00:34:26.798 06:38:15 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:26.798 06:38:15 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:26.798 Waiting for block devices as requested 00:34:26.798 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:34:26.798 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:27.058 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:27.058 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:27.058 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:27.318 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:27.318 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:27.318 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:27.318 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:27.578 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:27.578 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:27.578 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:27.578 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:27.838 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:27.838 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:27.838 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:27.838 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:28.099 06:38:18 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:28.099 06:38:18 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:28.099 06:38:18 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:28.099 06:38:18 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:28.099 06:38:18 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:28.099 06:38:18 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:28.099 06:38:18 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:28.099 06:38:18 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:28.099 06:38:18 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.099 06:38:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:28.099 06:38:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.007 06:38:20 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:30.007 
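The reset above re-binds every test device from vfio-pci back to its kernel driver (nvme for the 82:00.0 SSD, ioatdma for the I/OAT DMA channels). setup.sh is assumed to drive this through the generic sysfs driver interface; a hand-rolled equivalent for the single NVMe device, using the standard driver_override mechanism, would be roughly:

    bdf=0000:82:00.0                                          # the device logged above
    echo "$bdf" > /sys/bus/pci/drivers/vfio-pci/unbind        # detach from vfio-pci
    echo nvme > "/sys/bus/pci/devices/$bdf/driver_override"   # pin the next probe to nvme
    echo "$bdf" > /sys/bus/pci/drivers_probe                  # ask the kernel to re-probe it
    echo > "/sys/bus/pci/devices/$bdf/driver_override"        # clear the override again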
00:34:30.007 real 1m7.507s 00:34:30.007 user 6m29.667s 00:34:30.007 sys 0m18.391s 00:34:30.007 06:38:20 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:30.007 06:38:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:30.007 ************************************ 00:34:30.007 END TEST nvmf_dif 00:34:30.007 ************************************ 00:34:30.266 06:38:20 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:30.266 06:38:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:30.266 06:38:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:30.266 06:38:20 -- common/autotest_common.sh@10 -- # set +x 00:34:30.266 ************************************ 00:34:30.266 START TEST nvmf_abort_qd_sizes 00:34:30.266 ************************************ 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:30.266 * Looking for test storage... 00:34:30.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:30.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.266 --rc genhtml_branch_coverage=1 00:34:30.266 --rc genhtml_function_coverage=1 00:34:30.266 --rc genhtml_legend=1 00:34:30.266 --rc geninfo_all_blocks=1 00:34:30.266 --rc geninfo_unexecuted_blocks=1 00:34:30.266 00:34:30.266 ' 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:30.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.266 --rc genhtml_branch_coverage=1 00:34:30.266 --rc genhtml_function_coverage=1 00:34:30.266 --rc genhtml_legend=1 00:34:30.266 --rc geninfo_all_blocks=1 00:34:30.266 --rc geninfo_unexecuted_blocks=1 00:34:30.266 00:34:30.266 ' 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:30.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.266 --rc genhtml_branch_coverage=1 00:34:30.266 --rc genhtml_function_coverage=1 00:34:30.266 --rc genhtml_legend=1 00:34:30.266 --rc geninfo_all_blocks=1 00:34:30.266 --rc geninfo_unexecuted_blocks=1 00:34:30.266 00:34:30.266 ' 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:30.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.266 --rc genhtml_branch_coverage=1 00:34:30.266 --rc genhtml_function_coverage=1 00:34:30.266 --rc genhtml_legend=1 00:34:30.266 --rc geninfo_all_blocks=1 00:34:30.266 --rc geninfo_unexecuted_blocks=1 00:34:30.266 00:34:30.266 ' 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:30.266 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:30.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:30.267 06:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:32.798 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:32.798 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:32.798 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:32.798 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:32.798 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:32.798 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:32.799 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:32.799 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:32.799 Found net devices under 0000:84:00.0: cvl_0_0 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:32.799 Found net devices under 0000:84:00.1: cvl_0_1 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:32.799 06:38:22 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:32.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:32.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:34:32.799 00:34:32.799 --- 10.0.0.2 ping statistics --- 00:34:32.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.799 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:32.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
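The two pings close out nvmf_tcp_init: one of the two ice ports (cvl_0_0) is moved into a private network namespace to act as the target, while the other (cvl_0_1) stays in the host namespace as the initiator, so NVMe/TCP traffic actually crosses the link. Condensed from the steps traced above, using this run's interface names:

# Split target/initiator topology built by nvmf_tcp_init.
ip netns add cvl_0_0_ns_spdk                      # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open TCP/4420 on the initiator interface; the comment tag is what lets the
# harness strip exactly this rule again at cleanup (the iptables-save pass
# with grep -v SPDK_NVMF earlier in the log).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify both directions before any NVMe traffic flows.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1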
00:34:32.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:34:32.799 00:34:32.799 --- 10.0.0.1 ping statistics --- 00:34:32.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.799 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:32.799 06:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:33.739 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:33.739 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:33.739 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:33.739 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:33.739 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:33.739 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:33.739 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:33.739 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:33.739 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:33.739 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:33.739 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:33.739 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:33.739 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:33.739 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:33.739 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:33.739 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:34.673 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1252506 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1252506 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1252506 ']' 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
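nvmfappstart then launches the target application inside that namespace and waits for its RPC socket. A sketch of the same sequence; the polling loop is an illustrative stand-in for the harness's waitforlisten helper and assumes the default /var/tmp/spdk.sock:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start nvmf_tgt in the target namespace: shm id 0, full tracepoint mask, 4 cores.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# Block until the app answers RPCs (or dies), roughly what waitforlisten does.
until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done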
00:34:34.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.933 06:38:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:34.933 [2024-12-08 06:38:24.935122] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:34:34.933 [2024-12-08 06:38:24.935217] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:34.933 [2024-12-08 06:38:25.008303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:35.192 [2024-12-08 06:38:25.069629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.192 [2024-12-08 06:38:25.069695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.192 [2024-12-08 06:38:25.069709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.192 [2024-12-08 06:38:25.069719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.192 [2024-12-08 06:38:25.069755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:35.192 [2024-12-08 06:38:25.071424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.192 [2024-12-08 06:38:25.071488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:35.192 [2024-12-08 06:38:25.071553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:35.192 [2024-12-08 06:38:25.071556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:82:00.0 ]] 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:35.192 
06:38:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:82:00.0 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.192 06:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:35.192 ************************************ 00:34:35.192 START TEST spdk_target_abort 00:34:35.192 ************************************ 00:34:35.192 06:38:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:35.192 06:38:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:35.192 06:38:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:34:35.192 06:38:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.192 06:38:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.474 spdk_targetn1 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.474 [2024-12-08 06:38:28.099507] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.474 [2024-12-08 06:38:28.151867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:38.474 06:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:41.761 Initializing NVMe Controllers 00:34:41.761 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:41.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:41.761 Initialization complete. Launching workers. 00:34:41.761 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11417, failed: 0 00:34:41.761 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1345, failed to submit 10072 00:34:41.761 success 694, unsuccessful 651, failed 0 00:34:41.761 06:38:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:41.761 06:38:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:45.049 Initializing NVMe Controllers 00:34:45.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:45.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:45.049 Initialization complete. Launching workers. 00:34:45.049 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8801, failed: 0 00:34:45.049 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1265, failed to submit 7536 00:34:45.049 success 319, unsuccessful 946, failed 0 00:34:45.049 06:38:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:45.049 06:38:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:48.337 Initializing NVMe Controllers 00:34:48.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:48.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:48.337 Initialization complete. Launching workers. 
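The spdk_target_abort body is straightforward: the local NVMe drive at 0000:82:00.0 is attached as bdev spdk_targetn1, exported over the namespaced TCP listener, and the abort example is swept across queue depths 4, 24 and 64. Collected from the rpc_cmd xtrace above into one runnable sequence, under the same path and socket assumptions as before:

rpc="$SPDK/scripts/rpc.py"
"$rpc" bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
# Sweep the abort tester; in its output the NS lines count I/Os driven at the
# namespace and the CTRLR lines the abort commands raced against them.
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    "$SPDK/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done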
00:34:48.337 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30828, failed: 0 00:34:48.337 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2643, failed to submit 28185 00:34:48.337 success 534, unsuccessful 2109, failed 0 00:34:48.337 06:38:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:48.337 06:38:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.337 06:38:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:48.337 06:38:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.337 06:38:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:48.337 06:38:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.337 06:38:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:49.269 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.269 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1252506 00:34:49.269 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1252506 ']' 00:34:49.269 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1252506 00:34:49.269 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:49.269 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.269 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1252506 00:34:49.269 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:49.269 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:49.269 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1252506' 00:34:49.269 killing process with pid 1252506 00:34:49.269 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1252506 00:34:49.269 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1252506 00:34:49.527 00:34:49.527 real 0m14.223s 00:34:49.527 user 0m53.694s 00:34:49.527 sys 0m2.959s 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:49.527 ************************************ 00:34:49.527 END TEST spdk_target_abort 00:34:49.527 ************************************ 00:34:49.527 06:38:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:49.527 06:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:49.527 06:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.527 06:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:49.527 ************************************ 00:34:49.527 START TEST kernel_target_abort 00:34:49.527 
************************************ 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:49.527 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:49.528 06:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:50.904 Waiting for block devices as requested 00:34:50.904 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:34:50.904 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:50.904 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:51.162 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:51.162 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:51.163 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:51.423 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:51.423 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:51.423 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:51.423 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:51.681 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:51.681 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:51.681 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:51.941 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:51.941 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:51.941 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:51.941 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:52.201 No valid GPT data, bailing 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:52.201 06:38:42 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:52.201 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:34:52.459 00:34:52.459 Discovery Log Number of Records 2, Generation counter 2 00:34:52.459 =====Discovery Log Entry 0====== 00:34:52.459 trtype: tcp 00:34:52.459 adrfam: ipv4 00:34:52.459 subtype: current discovery subsystem 00:34:52.459 treq: not specified, sq flow control disable supported 00:34:52.459 portid: 1 00:34:52.459 trsvcid: 4420 00:34:52.459 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:52.459 traddr: 10.0.0.1 00:34:52.459 eflags: none 00:34:52.459 sectype: none 00:34:52.459 =====Discovery Log Entry 1====== 00:34:52.459 trtype: tcp 00:34:52.459 adrfam: ipv4 00:34:52.459 subtype: nvme subsystem 00:34:52.459 treq: not specified, sq flow control disable supported 00:34:52.459 portid: 1 00:34:52.459 trsvcid: 4420 00:34:52.459 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:52.459 traddr: 10.0.0.1 00:34:52.459 eflags: none 00:34:52.459 sectype: none 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:52.459 06:38:42 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:52.459 06:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:55.747 Initializing NVMe Controllers 00:34:55.747 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:55.747 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:55.747 Initialization complete. Launching workers. 00:34:55.747 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48703, failed: 0 00:34:55.747 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48703, failed to submit 0 00:34:55.747 success 0, unsuccessful 48703, failed 0 00:34:55.747 06:38:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:55.747 06:38:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:59.038 Initializing NVMe Controllers 00:34:59.038 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:59.038 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:59.038 Initialization complete. Launching workers. 
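For the kernel_target_abort pass no SPDK target process is involved: configure_kernel_target drives the in-kernel nvmet target through configfs, as the mkdir/echo xtrace above shows. A reconstructed sketch; the xtrace does not show the redirection targets of those echo commands, so the attribute file names used here (attr_serial, attr_allow_any_host, device_path, enable, addr_*) are the standard kernel nvmet configfs nodes and should be read as an assumption about what common.sh writes to:

modprobe nvmet nvmet-tcp            # teardown later removes both (nvmet_tcp, nvmet)
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub"
mkdir "$sub/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"
echo 1              > "$sub/attr_allow_any_host"
echo /dev/nvme0n1   > "$sub/namespaces/1/device_path"   # the drive that passed the GPT check
echo 1              > "$sub/namespaces/1/enable"
echo 10.0.0.1       > "$port/addr_traddr"
echo tcp            > "$port/addr_trtype"
echo 4420           > "$port/addr_trsvcid"
echo ipv4           > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"    # expose the subsystem on the port

clean_kernel_target near the end of the log unwinds this in reverse order: disable the namespace with echo 0, rm -f the port's subsystem link, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.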
00:34:59.038 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93425, failed: 0 00:34:59.038 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21758, failed to submit 71667 00:34:59.038 success 0, unsuccessful 21758, failed 0 00:34:59.038 06:38:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:59.038 06:38:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:02.321 Initializing NVMe Controllers 00:35:02.321 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:02.321 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:02.321 Initialization complete. Launching workers. 00:35:02.321 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87470, failed: 0 00:35:02.321 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21826, failed to submit 65644 00:35:02.321 success 0, unsuccessful 21826, failed 0 00:35:02.321 06:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:02.321 06:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:02.321 06:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:02.321 06:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:02.321 06:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:02.321 06:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:02.321 06:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:02.321 06:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:02.321 06:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:02.321 06:38:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:03.274 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:03.274 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:03.274 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:03.274 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:03.274 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:03.274 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:03.274 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:03.274 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:03.274 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:03.274 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:03.274 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:03.274 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:03.274 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:03.274 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:03.274 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:35:03.274 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:04.212 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:35:04.212 00:35:04.212 real 0m14.675s 00:35:04.212 user 0m6.154s 00:35:04.212 sys 0m3.654s 00:35:04.212 06:38:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:04.212 06:38:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:04.212 ************************************ 00:35:04.212 END TEST kernel_target_abort 00:35:04.212 ************************************ 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:04.212 rmmod nvme_tcp 00:35:04.212 rmmod nvme_fabrics 00:35:04.212 rmmod nvme_keyring 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1252506 ']' 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1252506 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1252506 ']' 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1252506 00:35:04.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1252506) - No such process 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1252506 is not found' 00:35:04.212 Process with pid 1252506 is not found 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:04.212 06:38:54 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:05.599 Waiting for block devices as requested 00:35:05.599 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:35:05.599 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:05.856 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:05.857 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:05.857 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:06.116 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:06.116 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:06.116 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:06.116 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:06.375 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:06.375 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:06.375 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:06.375 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:06.635 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:06.635 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:06.635 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:06.635 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:35:06.895 06:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:06.895 06:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:06.895 06:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:06.895 06:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:06.895 06:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:06.895 06:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:06.895 06:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:06.895 06:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:06.895 06:38:56 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.895 06:38:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:06.895 06:38:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.863 06:38:58 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:08.863 00:35:08.863 real 0m38.760s 00:35:08.863 user 1m2.205s 00:35:08.863 sys 0m10.219s 00:35:08.863 06:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.863 06:38:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:08.863 ************************************ 00:35:08.863 END TEST nvmf_abort_qd_sizes 00:35:08.863 ************************************ 00:35:08.863 06:38:58 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:08.863 06:38:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:08.863 06:38:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:08.863 06:38:58 -- common/autotest_common.sh@10 -- # set +x 00:35:09.122 ************************************ 00:35:09.122 START TEST keyring_file 00:35:09.122 ************************************ 00:35:09.122 06:38:58 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:09.122 * Looking for test storage... 
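[Editor's note] The nvmftestfini teardown traced above boils down to two helpers: nvmfcleanup retries unloading the initiator modules (they can be briefly busy right after a detach), and iptr round-trips the firewall state through iptables-save/-restore with the SPDK-tagged rules filtered out. A minimal sketch of both, under the assumption that the retry loop works roughly as the xtrace suggests (the real helpers in nvmf/common.sh carry more error handling):

    # Sketch of nvmfcleanup: flush caches, then retry the module unloads
    # until they stick (nvme-tcp pulls in nvme-fabrics and nvme-keyring,
    # hence the rmmod lines in the trace).
    nvmfcleanup() {
        sync
        set +e
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1
        done
        set -e
    }

    # Sketch of iptr: rewrite iptables state without any SPDK_NVMF rules.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }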
00:35:09.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:09.122 06:38:59 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:09.122 06:38:59 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:35:09.122 06:38:59 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:09.122 06:38:59 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:09.122 06:38:59 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:09.123 06:38:59 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:09.123 06:38:59 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:09.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.123 --rc genhtml_branch_coverage=1 00:35:09.123 --rc genhtml_function_coverage=1 00:35:09.123 --rc genhtml_legend=1 00:35:09.123 --rc geninfo_all_blocks=1 00:35:09.123 --rc geninfo_unexecuted_blocks=1 00:35:09.123 00:35:09.123 ' 00:35:09.123 06:38:59 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:09.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.123 --rc genhtml_branch_coverage=1 00:35:09.123 --rc genhtml_function_coverage=1 00:35:09.123 --rc genhtml_legend=1 00:35:09.123 --rc geninfo_all_blocks=1 
00:35:09.123 --rc geninfo_unexecuted_blocks=1 00:35:09.123 00:35:09.123 ' 00:35:09.123 06:38:59 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:09.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.123 --rc genhtml_branch_coverage=1 00:35:09.123 --rc genhtml_function_coverage=1 00:35:09.123 --rc genhtml_legend=1 00:35:09.123 --rc geninfo_all_blocks=1 00:35:09.123 --rc geninfo_unexecuted_blocks=1 00:35:09.123 00:35:09.123 ' 00:35:09.123 06:38:59 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:09.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.123 --rc genhtml_branch_coverage=1 00:35:09.123 --rc genhtml_function_coverage=1 00:35:09.123 --rc genhtml_legend=1 00:35:09.123 --rc geninfo_all_blocks=1 00:35:09.123 --rc geninfo_unexecuted_blocks=1 00:35:09.123 00:35:09.123 ' 00:35:09.123 06:38:59 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.123 06:38:59 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.123 06:38:59 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.123 06:38:59 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.123 06:38:59 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.123 06:38:59 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:09.123 06:38:59 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:09.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:09.123 06:38:59 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:09.123 06:38:59 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:09.123 06:38:59 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:09.123 06:38:59 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:09.123 06:38:59 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:09.123 06:38:59 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
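[Editor's note] The prep_key helper unfolding below generates an NVMe/TCP TLS pre-shared key file: it wraps a raw hex PSK in the TP 8011 interchange format and stashes it in a mode-0600 temp file. A condensed sketch of those steps; the CRC-32 framing shown here is one reading of the spec, and the harness computes it with its own inline 'python -' snippet (the step visible in the trace), so treat the framing details as an approximation:

    key_hex=00112233445566778899aabbccddeeff    # test vector, not a secret
    path=$(mktemp)                              # e.g. /tmp/tmp.XKsgXoVtwH
    psk=$(python3 -c '
    import base64, zlib
    key = bytes.fromhex("00112233445566778899aabbccddeeff")
    crc = zlib.crc32(key).to_bytes(4, "little")   # appended checksum
    print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
    ')                                          # "00" = digest 0 (no hash)
    echo "$psk" > "$path"
    chmod 0600 "$path"   # the keyring rejects group/other-readable files
    echo "$path"

The 0600 requirement is exercised later in this test: re-adding the same file after a chmod 0660 is expected to fail with "Invalid permissions for key file".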
00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XKsgXoVtwH 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XKsgXoVtwH 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XKsgXoVtwH 00:35:09.123 06:38:59 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.XKsgXoVtwH 00:35:09.123 06:38:59 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MffuKi0gEA 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:09.123 06:38:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MffuKi0gEA 00:35:09.123 06:38:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MffuKi0gEA 00:35:09.123 06:38:59 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.MffuKi0gEA 00:35:09.124 06:38:59 keyring_file -- keyring/file.sh@30 -- # tgtpid=1258294 00:35:09.124 06:38:59 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:09.124 06:38:59 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1258294 00:35:09.124 06:38:59 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1258294 ']' 00:35:09.124 06:38:59 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:09.124 06:38:59 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:09.124 06:38:59 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:09.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:09.124 06:38:59 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:09.124 06:38:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:09.382 [2024-12-08 06:38:59.293126] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:35:09.382 [2024-12-08 06:38:59.293231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258294 ] 00:35:09.382 [2024-12-08 06:38:59.359250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.382 [2024-12-08 06:38:59.415859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:09.640 06:38:59 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:09.640 [2024-12-08 06:38:59.657518] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:09.640 null0 00:35:09.640 [2024-12-08 06:38:59.689577] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:09.640 [2024-12-08 06:38:59.689919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.640 06:38:59 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:09.640 [2024-12-08 06:38:59.713617] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:09.640 request: 00:35:09.640 { 00:35:09.640 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:09.640 "secure_channel": false, 00:35:09.640 "listen_address": { 00:35:09.640 "trtype": "tcp", 00:35:09.640 "traddr": "127.0.0.1", 00:35:09.640 "trsvcid": "4420" 00:35:09.640 }, 00:35:09.640 "method": "nvmf_subsystem_add_listener", 00:35:09.640 "req_id": 1 00:35:09.640 } 00:35:09.640 Got JSON-RPC error response 00:35:09.640 response: 00:35:09.640 { 00:35:09.640 
"code": -32602, 00:35:09.640 "message": "Invalid parameters" 00:35:09.640 } 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:09.640 06:38:59 keyring_file -- keyring/file.sh@47 -- # bperfpid=1258302 00:35:09.640 06:38:59 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:09.640 06:38:59 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1258302 /var/tmp/bperf.sock 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1258302 ']' 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:09.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:09.640 06:38:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:09.899 [2024-12-08 06:38:59.761388] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:35:09.899 [2024-12-08 06:38:59.761492] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258302 ] 00:35:09.899 [2024-12-08 06:38:59.825647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.899 [2024-12-08 06:38:59.890837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:09.899 06:39:00 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:09.899 06:39:00 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:09.899 06:39:00 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XKsgXoVtwH 00:35:09.899 06:39:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XKsgXoVtwH 00:35:10.466 06:39:00 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MffuKi0gEA 00:35:10.466 06:39:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MffuKi0gEA 00:35:10.466 06:39:00 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:10.466 06:39:00 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:10.466 06:39:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.466 06:39:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:10.466 06:39:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:11.031 06:39:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.XKsgXoVtwH == \/\t\m\p\/\t\m\p\.\X\K\s\g\X\o\V\t\w\H ]] 00:35:11.031 06:39:00 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:11.031 06:39:00 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:11.031 06:39:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.031 06:39:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:11.031 06:39:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.289 06:39:01 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.MffuKi0gEA == \/\t\m\p\/\t\m\p\.\M\f\f\u\K\i\0\g\E\A ]] 00:35:11.289 06:39:01 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:11.289 06:39:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:11.289 06:39:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.289 06:39:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.289 06:39:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.289 06:39:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:11.548 06:39:01 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:11.548 06:39:01 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:11.548 06:39:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:11.548 06:39:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.548 06:39:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.548 06:39:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.548 06:39:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:11.806 06:39:01 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:11.806 06:39:01 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.806 06:39:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:12.064 [2024-12-08 06:39:01.992675] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:12.064 nvme0n1 00:35:12.064 06:39:02 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:12.064 06:39:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:12.064 06:39:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.064 06:39:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.064 06:39:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.064 06:39:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.322 06:39:02 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:12.322 06:39:02 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:12.322 06:39:02 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:35:12.322 06:39:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.322 06:39:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.322 06:39:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.322 06:39:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:12.580 06:39:02 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:12.580 06:39:02 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.838 Running I/O for 1 seconds... 00:35:13.775 10289.00 IOPS, 40.19 MiB/s 00:35:13.775 Latency(us) 00:35:13.775 [2024-12-08T05:39:03.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.775 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:13.775 nvme0n1 : 1.01 10333.63 40.37 0.00 0.00 12348.26 3859.34 18155.90 00:35:13.775 [2024-12-08T05:39:03.894Z] =================================================================================================================== 00:35:13.775 [2024-12-08T05:39:03.894Z] Total : 10333.63 40.37 0.00 0.00 12348.26 3859.34 18155.90 00:35:13.775 { 00:35:13.775 "results": [ 00:35:13.775 { 00:35:13.775 "job": "nvme0n1", 00:35:13.775 "core_mask": "0x2", 00:35:13.775 "workload": "randrw", 00:35:13.775 "percentage": 50, 00:35:13.775 "status": "finished", 00:35:13.775 "queue_depth": 128, 00:35:13.775 "io_size": 4096, 00:35:13.775 "runtime": 1.008261, 00:35:13.775 "iops": 10333.63385075888, 00:35:13.775 "mibps": 40.36575722952688, 00:35:13.775 "io_failed": 0, 00:35:13.775 "io_timeout": 0, 00:35:13.775 "avg_latency_us": 12348.25923295404, 00:35:13.775 "min_latency_us": 3859.342222222222, 00:35:13.775 "max_latency_us": 18155.89925925926 00:35:13.775 } 00:35:13.775 ], 00:35:13.775 "core_count": 1 00:35:13.775 } 00:35:13.775 06:39:03 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:13.775 06:39:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:14.033 06:39:04 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:14.033 06:39:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:14.033 06:39:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:14.033 06:39:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:14.033 06:39:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.033 06:39:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:14.291 06:39:04 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:14.291 06:39:04 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:14.291 06:39:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:14.291 06:39:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:14.291 06:39:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:14.291 06:39:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:14.291 06:39:04 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.549 06:39:04 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:14.549 06:39:04 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:14.549 06:39:04 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:14.549 06:39:04 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:14.549 06:39:04 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:14.549 06:39:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:14.549 06:39:04 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:14.549 06:39:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:14.549 06:39:04 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:14.549 06:39:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:14.806 [2024-12-08 06:39:04.898714] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:14.806 [2024-12-08 06:39:04.899377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246a7b0 (107): Transport endpoint is not connected 00:35:14.806 [2024-12-08 06:39:04.900357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246a7b0 (9): Bad file descriptor 00:35:14.806 [2024-12-08 06:39:04.901356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:14.806 [2024-12-08 06:39:04.901388] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:14.806 [2024-12-08 06:39:04.901418] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:14.806 [2024-12-08 06:39:04.901432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
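[Editor's note] The ENOTCONN errors above and the Input/output error response below are the pass condition, not a failure: the attach is wrapped in NOT, which inverts the wrapped command's exit status so an expected rejection counts as success. A minimal sketch of that idiom, simplified from the harness's NOT in autotest_common.sh (the real helper also inspects the error code):

    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded -> test failure
        fi
        return 0        # command failed as expected
    }

    # The attach must be rejected because key1 does not match the PSK the
    # target side expects, so the TLS handshake aborts with ENOTCONN:
    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key1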
00:35:14.806 request: 00:35:14.806 { 00:35:14.806 "name": "nvme0", 00:35:14.806 "trtype": "tcp", 00:35:14.806 "traddr": "127.0.0.1", 00:35:14.806 "adrfam": "ipv4", 00:35:14.806 "trsvcid": "4420", 00:35:14.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.806 "prchk_reftag": false, 00:35:14.806 "prchk_guard": false, 00:35:14.806 "hdgst": false, 00:35:14.806 "ddgst": false, 00:35:14.806 "psk": "key1", 00:35:14.806 "allow_unrecognized_csi": false, 00:35:14.806 "method": "bdev_nvme_attach_controller", 00:35:14.806 "req_id": 1 00:35:14.806 } 00:35:14.806 Got JSON-RPC error response 00:35:14.806 response: 00:35:14.806 { 00:35:14.806 "code": -5, 00:35:14.806 "message": "Input/output error" 00:35:14.806 } 00:35:14.806 06:39:04 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:14.806 06:39:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:14.806 06:39:04 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:14.806 06:39:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:14.806 06:39:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:14.806 06:39:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:14.806 06:39:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:14.806 06:39:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:14.806 06:39:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:14.806 06:39:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.370 06:39:05 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:15.370 06:39:05 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:15.370 06:39:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:15.370 06:39:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:15.370 06:39:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.370 06:39:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:15.370 06:39:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.371 06:39:05 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:15.371 06:39:05 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:15.371 06:39:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:15.934 06:39:05 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:15.934 06:39:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:15.934 06:39:06 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:15.934 06:39:06 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:15.934 06:39:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.500 06:39:06 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:35:16.500 06:39:06 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.XKsgXoVtwH 00:35:16.500 06:39:06 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.XKsgXoVtwH 00:35:16.500 06:39:06 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:16.500 06:39:06 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.XKsgXoVtwH 00:35:16.500 06:39:06 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:16.500 06:39:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.500 06:39:06 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:16.500 06:39:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.500 06:39:06 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XKsgXoVtwH 00:35:16.500 06:39:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XKsgXoVtwH 00:35:16.500 [2024-12-08 06:39:06.575533] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XKsgXoVtwH': 0100660 00:35:16.500 [2024-12-08 06:39:06.575575] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:16.500 request: 00:35:16.500 { 00:35:16.500 "name": "key0", 00:35:16.500 "path": "/tmp/tmp.XKsgXoVtwH", 00:35:16.500 "method": "keyring_file_add_key", 00:35:16.500 "req_id": 1 00:35:16.500 } 00:35:16.500 Got JSON-RPC error response 00:35:16.500 response: 00:35:16.500 { 00:35:16.500 "code": -1, 00:35:16.500 "message": "Operation not permitted" 00:35:16.500 } 00:35:16.500 06:39:06 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:16.500 06:39:06 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:16.500 06:39:06 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:16.500 06:39:06 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:16.500 06:39:06 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.XKsgXoVtwH 00:35:16.500 06:39:06 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XKsgXoVtwH 00:35:16.500 06:39:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XKsgXoVtwH 00:35:17.077 06:39:06 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.XKsgXoVtwH 00:35:17.077 06:39:06 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:17.077 06:39:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:17.077 06:39:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:17.077 06:39:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:17.077 06:39:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:17.077 06:39:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:17.077 06:39:07 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:17.077 06:39:07 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.078 06:39:07 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:17.078 06:39:07 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.078 06:39:07 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:17.078 06:39:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.078 06:39:07 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:17.078 06:39:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.078 06:39:07 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.078 06:39:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.334 [2024-12-08 06:39:07.449909] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.XKsgXoVtwH': No such file or directory 00:35:17.334 [2024-12-08 06:39:07.449945] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:17.334 [2024-12-08 06:39:07.449967] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:17.334 [2024-12-08 06:39:07.449981] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:17.334 [2024-12-08 06:39:07.449994] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:17.334 [2024-12-08 06:39:07.450013] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:17.591 request: 00:35:17.592 { 00:35:17.592 "name": "nvme0", 00:35:17.592 "trtype": "tcp", 00:35:17.592 "traddr": "127.0.0.1", 00:35:17.592 "adrfam": "ipv4", 00:35:17.592 "trsvcid": "4420", 00:35:17.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:17.592 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:17.592 "prchk_reftag": false, 00:35:17.592 "prchk_guard": false, 00:35:17.592 "hdgst": false, 00:35:17.592 "ddgst": false, 00:35:17.592 "psk": "key0", 00:35:17.592 "allow_unrecognized_csi": false, 00:35:17.592 "method": "bdev_nvme_attach_controller", 00:35:17.592 "req_id": 1 00:35:17.592 } 00:35:17.592 Got JSON-RPC error response 00:35:17.592 response: 00:35:17.592 { 00:35:17.592 "code": -19, 00:35:17.592 "message": "No such device" 00:35:17.592 } 00:35:17.592 06:39:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:17.592 06:39:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:17.592 06:39:07 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:17.592 06:39:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:17.592 06:39:07 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:17.592 06:39:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:17.850 06:39:07 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:17.850 06:39:07 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:35:17.850 06:39:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:17.850 06:39:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:17.850 06:39:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:17.850 06:39:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:17.850 06:39:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.v1ok0xTcLE 00:35:17.850 06:39:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:17.850 06:39:07 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:17.850 06:39:07 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:17.850 06:39:07 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:17.850 06:39:07 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:17.850 06:39:07 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:17.850 06:39:07 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:17.850 06:39:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.v1ok0xTcLE 00:35:17.850 06:39:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.v1ok0xTcLE 00:35:17.850 06:39:07 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.v1ok0xTcLE 00:35:17.850 06:39:07 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v1ok0xTcLE 00:35:17.850 06:39:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v1ok0xTcLE 00:35:18.107 06:39:08 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:18.107 06:39:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:18.365 nvme0n1 00:35:18.365 06:39:08 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:18.365 06:39:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:18.365 06:39:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:18.365 06:39:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:18.365 06:39:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.365 06:39:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:18.623 06:39:08 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:18.623 06:39:08 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:18.623 06:39:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:19.189 06:39:09 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:19.189 06:39:09 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:19.189 06:39:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.189 06:39:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:19.189 06:39:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:19.189 06:39:09 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:19.189 06:39:09 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:19.189 06:39:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:19.189 06:39:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:19.189 06:39:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.189 06:39:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:19.189 06:39:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.756 06:39:09 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:19.756 06:39:09 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:19.756 06:39:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:19.756 06:39:09 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:19.756 06:39:09 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:19.756 06:39:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:20.322 06:39:10 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:20.323 06:39:10 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v1ok0xTcLE 00:35:20.323 06:39:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v1ok0xTcLE 00:35:20.323 06:39:10 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MffuKi0gEA 00:35:20.323 06:39:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MffuKi0gEA 00:35:20.579 06:39:10 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:20.580 06:39:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:21.143 nvme0n1 00:35:21.143 06:39:11 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:21.143 06:39:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:21.401 06:39:11 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:21.401 "subsystems": [ 00:35:21.401 { 00:35:21.401 "subsystem": "keyring", 00:35:21.401 "config": [ 00:35:21.401 { 00:35:21.401 "method": "keyring_file_add_key", 00:35:21.401 "params": { 00:35:21.401 "name": "key0", 00:35:21.401 "path": "/tmp/tmp.v1ok0xTcLE" 00:35:21.401 } 00:35:21.401 }, 00:35:21.401 { 00:35:21.401 "method": "keyring_file_add_key", 00:35:21.401 "params": { 00:35:21.401 "name": "key1", 00:35:21.401 "path": "/tmp/tmp.MffuKi0gEA" 00:35:21.401 } 00:35:21.401 } 00:35:21.401 ] 
00:35:21.401 }, 00:35:21.401 { 00:35:21.401 "subsystem": "iobuf", 00:35:21.401 "config": [ 00:35:21.401 { 00:35:21.401 "method": "iobuf_set_options", 00:35:21.401 "params": { 00:35:21.401 "small_pool_count": 8192, 00:35:21.401 "large_pool_count": 1024, 00:35:21.401 "small_bufsize": 8192, 00:35:21.401 "large_bufsize": 135168, 00:35:21.401 "enable_numa": false 00:35:21.401 } 00:35:21.401 } 00:35:21.401 ] 00:35:21.401 }, 00:35:21.401 { 00:35:21.401 "subsystem": "sock", 00:35:21.401 "config": [ 00:35:21.401 { 00:35:21.401 "method": "sock_set_default_impl", 00:35:21.401 "params": { 00:35:21.401 "impl_name": "posix" 00:35:21.401 } 00:35:21.401 }, 00:35:21.401 { 00:35:21.401 "method": "sock_impl_set_options", 00:35:21.401 "params": { 00:35:21.401 "impl_name": "ssl", 00:35:21.401 "recv_buf_size": 4096, 00:35:21.401 "send_buf_size": 4096, 00:35:21.402 "enable_recv_pipe": true, 00:35:21.402 "enable_quickack": false, 00:35:21.402 "enable_placement_id": 0, 00:35:21.402 "enable_zerocopy_send_server": true, 00:35:21.402 "enable_zerocopy_send_client": false, 00:35:21.402 "zerocopy_threshold": 0, 00:35:21.402 "tls_version": 0, 00:35:21.402 "enable_ktls": false 00:35:21.402 } 00:35:21.402 }, 00:35:21.402 { 00:35:21.402 "method": "sock_impl_set_options", 00:35:21.402 "params": { 00:35:21.402 "impl_name": "posix", 00:35:21.402 "recv_buf_size": 2097152, 00:35:21.402 "send_buf_size": 2097152, 00:35:21.402 "enable_recv_pipe": true, 00:35:21.402 "enable_quickack": false, 00:35:21.402 "enable_placement_id": 0, 00:35:21.402 "enable_zerocopy_send_server": true, 00:35:21.402 "enable_zerocopy_send_client": false, 00:35:21.402 "zerocopy_threshold": 0, 00:35:21.402 "tls_version": 0, 00:35:21.402 "enable_ktls": false 00:35:21.402 } 00:35:21.402 } 00:35:21.402 ] 00:35:21.402 }, 00:35:21.402 { 00:35:21.402 "subsystem": "vmd", 00:35:21.402 "config": [] 00:35:21.402 }, 00:35:21.402 { 00:35:21.402 "subsystem": "accel", 00:35:21.402 "config": [ 00:35:21.402 { 00:35:21.402 "method": "accel_set_options", 00:35:21.402 "params": { 00:35:21.402 "small_cache_size": 128, 00:35:21.402 "large_cache_size": 16, 00:35:21.402 "task_count": 2048, 00:35:21.402 "sequence_count": 2048, 00:35:21.402 "buf_count": 2048 00:35:21.402 } 00:35:21.402 } 00:35:21.402 ] 00:35:21.402 }, 00:35:21.402 { 00:35:21.402 "subsystem": "bdev", 00:35:21.402 "config": [ 00:35:21.402 { 00:35:21.402 "method": "bdev_set_options", 00:35:21.402 "params": { 00:35:21.402 "bdev_io_pool_size": 65535, 00:35:21.402 "bdev_io_cache_size": 256, 00:35:21.402 "bdev_auto_examine": true, 00:35:21.402 "iobuf_small_cache_size": 128, 00:35:21.402 "iobuf_large_cache_size": 16 00:35:21.402 } 00:35:21.402 }, 00:35:21.402 { 00:35:21.402 "method": "bdev_raid_set_options", 00:35:21.402 "params": { 00:35:21.402 "process_window_size_kb": 1024, 00:35:21.402 "process_max_bandwidth_mb_sec": 0 00:35:21.402 } 00:35:21.402 }, 00:35:21.402 { 00:35:21.402 "method": "bdev_iscsi_set_options", 00:35:21.402 "params": { 00:35:21.402 "timeout_sec": 30 00:35:21.402 } 00:35:21.402 }, 00:35:21.402 { 00:35:21.402 "method": "bdev_nvme_set_options", 00:35:21.402 "params": { 00:35:21.402 "action_on_timeout": "none", 00:35:21.402 "timeout_us": 0, 00:35:21.402 "timeout_admin_us": 0, 00:35:21.402 "keep_alive_timeout_ms": 10000, 00:35:21.402 "arbitration_burst": 0, 00:35:21.402 "low_priority_weight": 0, 00:35:21.402 "medium_priority_weight": 0, 00:35:21.402 "high_priority_weight": 0, 00:35:21.402 "nvme_adminq_poll_period_us": 10000, 00:35:21.402 "nvme_ioq_poll_period_us": 0, 00:35:21.402 "io_queue_requests": 512, 
00:35:21.402 "delay_cmd_submit": true, 00:35:21.402 "transport_retry_count": 4, 00:35:21.402 "bdev_retry_count": 3, 00:35:21.402 "transport_ack_timeout": 0, 00:35:21.402 "ctrlr_loss_timeout_sec": 0, 00:35:21.402 "reconnect_delay_sec": 0, 00:35:21.402 "fast_io_fail_timeout_sec": 0, 00:35:21.402 "disable_auto_failback": false, 00:35:21.402 "generate_uuids": false, 00:35:21.402 "transport_tos": 0, 00:35:21.402 "nvme_error_stat": false, 00:35:21.402 "rdma_srq_size": 0, 00:35:21.402 "io_path_stat": false, 00:35:21.402 "allow_accel_sequence": false, 00:35:21.402 "rdma_max_cq_size": 0, 00:35:21.402 "rdma_cm_event_timeout_ms": 0, 00:35:21.402 "dhchap_digests": [ 00:35:21.402 "sha256", 00:35:21.402 "sha384", 00:35:21.402 "sha512" 00:35:21.402 ], 00:35:21.402 "dhchap_dhgroups": [ 00:35:21.402 "null", 00:35:21.402 "ffdhe2048", 00:35:21.402 "ffdhe3072", 00:35:21.402 "ffdhe4096", 00:35:21.402 "ffdhe6144", 00:35:21.402 "ffdhe8192" 00:35:21.402 ] 00:35:21.402 } 00:35:21.402 }, 00:35:21.402 { 00:35:21.402 "method": "bdev_nvme_attach_controller", 00:35:21.402 "params": { 00:35:21.402 "name": "nvme0", 00:35:21.402 "trtype": "TCP", 00:35:21.402 "adrfam": "IPv4", 00:35:21.402 "traddr": "127.0.0.1", 00:35:21.402 "trsvcid": "4420", 00:35:21.402 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:21.402 "prchk_reftag": false, 00:35:21.402 "prchk_guard": false, 00:35:21.402 "ctrlr_loss_timeout_sec": 0, 00:35:21.402 "reconnect_delay_sec": 0, 00:35:21.402 "fast_io_fail_timeout_sec": 0, 00:35:21.402 "psk": "key0", 00:35:21.402 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:21.402 "hdgst": false, 00:35:21.402 "ddgst": false, 00:35:21.402 "multipath": "multipath" 00:35:21.402 } 00:35:21.402 }, 00:35:21.402 { 00:35:21.402 "method": "bdev_nvme_set_hotplug", 00:35:21.402 "params": { 00:35:21.402 "period_us": 100000, 00:35:21.402 "enable": false 00:35:21.402 } 00:35:21.402 }, 00:35:21.402 { 00:35:21.402 "method": "bdev_wait_for_examine" 00:35:21.402 } 00:35:21.402 ] 00:35:21.402 }, 00:35:21.402 { 00:35:21.402 "subsystem": "nbd", 00:35:21.402 "config": [] 00:35:21.402 } 00:35:21.402 ] 00:35:21.402 }' 00:35:21.402 06:39:11 keyring_file -- keyring/file.sh@115 -- # killprocess 1258302 00:35:21.402 06:39:11 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1258302 ']' 00:35:21.402 06:39:11 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1258302 00:35:21.402 06:39:11 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:21.402 06:39:11 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.402 06:39:11 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1258302 00:35:21.402 06:39:11 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:21.402 06:39:11 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:21.402 06:39:11 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1258302' 00:35:21.402 killing process with pid 1258302 00:35:21.402 06:39:11 keyring_file -- common/autotest_common.sh@973 -- # kill 1258302 00:35:21.402 Received shutdown signal, test time was about 1.000000 seconds 00:35:21.402 00:35:21.402 Latency(us) 00:35:21.402 [2024-12-08T05:39:11.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.402 [2024-12-08T05:39:11.521Z] =================================================================================================================== 00:35:21.402 [2024-12-08T05:39:11.521Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:35:21.402 06:39:11 keyring_file -- common/autotest_common.sh@978 -- # wait 1258302 00:35:21.660 06:39:11 keyring_file -- keyring/file.sh@118 -- # bperfpid=1260508 00:35:21.660 06:39:11 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1260508 /var/tmp/bperf.sock 00:35:21.660 06:39:11 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1260508 ']' 00:35:21.660 06:39:11 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:21.661 06:39:11 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:21.661 06:39:11 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:21.661 06:39:11 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:21.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:21.661 06:39:11 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:21.661 "subsystems": [ 00:35:21.661 { 00:35:21.661 "subsystem": "keyring", 00:35:21.661 "config": [ 00:35:21.661 { 00:35:21.661 "method": "keyring_file_add_key", 00:35:21.661 "params": { 00:35:21.661 "name": "key0", 00:35:21.661 "path": "/tmp/tmp.v1ok0xTcLE" 00:35:21.661 } 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "method": "keyring_file_add_key", 00:35:21.661 "params": { 00:35:21.661 "name": "key1", 00:35:21.661 "path": "/tmp/tmp.MffuKi0gEA" 00:35:21.661 } 00:35:21.661 } 00:35:21.661 ] 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "subsystem": "iobuf", 00:35:21.661 "config": [ 00:35:21.661 { 00:35:21.661 "method": "iobuf_set_options", 00:35:21.661 "params": { 00:35:21.661 "small_pool_count": 8192, 00:35:21.661 "large_pool_count": 1024, 00:35:21.661 "small_bufsize": 8192, 00:35:21.661 "large_bufsize": 135168, 00:35:21.661 "enable_numa": false 00:35:21.661 } 00:35:21.661 } 00:35:21.661 ] 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "subsystem": "sock", 00:35:21.661 "config": [ 00:35:21.661 { 00:35:21.661 "method": "sock_set_default_impl", 00:35:21.661 "params": { 00:35:21.661 "impl_name": "posix" 00:35:21.661 } 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "method": "sock_impl_set_options", 00:35:21.661 "params": { 00:35:21.661 "impl_name": "ssl", 00:35:21.661 "recv_buf_size": 4096, 00:35:21.661 "send_buf_size": 4096, 00:35:21.661 "enable_recv_pipe": true, 00:35:21.661 "enable_quickack": false, 00:35:21.661 "enable_placement_id": 0, 00:35:21.661 "enable_zerocopy_send_server": true, 00:35:21.661 "enable_zerocopy_send_client": false, 00:35:21.661 "zerocopy_threshold": 0, 00:35:21.661 "tls_version": 0, 00:35:21.661 "enable_ktls": false 00:35:21.661 } 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "method": "sock_impl_set_options", 00:35:21.661 "params": { 00:35:21.661 "impl_name": "posix", 00:35:21.661 "recv_buf_size": 2097152, 00:35:21.661 "send_buf_size": 2097152, 00:35:21.661 "enable_recv_pipe": true, 00:35:21.661 "enable_quickack": false, 00:35:21.661 "enable_placement_id": 0, 00:35:21.661 "enable_zerocopy_send_server": true, 00:35:21.661 "enable_zerocopy_send_client": false, 00:35:21.661 "zerocopy_threshold": 0, 00:35:21.661 "tls_version": 0, 00:35:21.661 "enable_ktls": false 00:35:21.661 } 00:35:21.661 } 00:35:21.661 ] 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "subsystem": "vmd", 00:35:21.661 "config": [] 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "subsystem": "accel", 00:35:21.661 
"config": [ 00:35:21.661 { 00:35:21.661 "method": "accel_set_options", 00:35:21.661 "params": { 00:35:21.661 "small_cache_size": 128, 00:35:21.661 "large_cache_size": 16, 00:35:21.661 "task_count": 2048, 00:35:21.661 "sequence_count": 2048, 00:35:21.661 "buf_count": 2048 00:35:21.661 } 00:35:21.661 } 00:35:21.661 ] 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "subsystem": "bdev", 00:35:21.661 "config": [ 00:35:21.661 { 00:35:21.661 "method": "bdev_set_options", 00:35:21.661 "params": { 00:35:21.661 "bdev_io_pool_size": 65535, 00:35:21.661 "bdev_io_cache_size": 256, 00:35:21.661 "bdev_auto_examine": true, 00:35:21.661 "iobuf_small_cache_size": 128, 00:35:21.661 "iobuf_large_cache_size": 16 00:35:21.661 } 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "method": "bdev_raid_set_options", 00:35:21.661 "params": { 00:35:21.661 "process_window_size_kb": 1024, 00:35:21.661 "process_max_bandwidth_mb_sec": 0 00:35:21.661 } 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "method": "bdev_iscsi_set_options", 00:35:21.661 "params": { 00:35:21.661 "timeout_sec": 30 00:35:21.661 } 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "method": "bdev_nvme_set_options", 00:35:21.661 "params": { 00:35:21.661 "action_on_timeout": "none", 00:35:21.661 "timeout_us": 0, 00:35:21.661 "timeout_admin_us": 0, 00:35:21.661 "keep_alive_timeout_ms": 10000, 00:35:21.661 "arbitration_burst": 0, 00:35:21.661 "low_priority_weight": 0, 00:35:21.661 "medium_priority_weight": 0, 00:35:21.661 "high_priority_weight": 0, 00:35:21.661 "nvme_adminq_poll_period_us": 10000, 00:35:21.661 "nvme_ioq_poll_period_us": 0, 00:35:21.661 "io_queue_requests": 512, 00:35:21.661 "delay_cmd_submit": true, 00:35:21.661 "transport_retry_count": 4, 00:35:21.661 "bdev_retry_count": 3, 00:35:21.661 "transport_ack_timeout": 0, 00:35:21.661 "ctrlr_loss_timeout_sec": 0, 00:35:21.661 "reconnect_delay_sec": 0, 00:35:21.661 "fast_io_fail_timeout_sec": 0, 00:35:21.661 "disable_auto_failback": false, 00:35:21.661 "generate_uuids": false, 00:35:21.661 "transport_tos": 0, 00:35:21.661 "nvme_error_stat": false, 00:35:21.661 "rdma_srq_size": 0, 00:35:21.661 "io_path_stat": false, 00:35:21.661 "allow_accel_sequence": false, 00:35:21.661 "rdma_max_cq_size": 0, 00:35:21.661 "rdma_cm_event_timeout_ms": 0, 00:35:21.661 "dhchap_digests": [ 00:35:21.661 "sha256", 00:35:21.661 "sha384", 00:35:21.661 "sha512" 00:35:21.661 ], 00:35:21.661 "dhchap_dhgroups": [ 00:35:21.661 "null", 00:35:21.661 "ffdhe2048", 00:35:21.661 "ffdhe3072", 00:35:21.661 "ffdhe4096", 00:35:21.661 "ffdhe6144", 00:35:21.661 "ffdhe8192" 00:35:21.661 ] 00:35:21.661 } 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "method": "bdev_nvme_attach_controller", 00:35:21.661 "params": { 00:35:21.661 "name": "nvme0", 00:35:21.661 "trtype": "TCP", 00:35:21.661 "adrfam": "IPv4", 00:35:21.661 "traddr": "127.0.0.1", 00:35:21.661 "trsvcid": "4420", 00:35:21.661 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:21.661 "prchk_reftag": false, 00:35:21.661 "prchk_guard": false, 00:35:21.661 "ctrlr_loss_timeout_sec": 0, 00:35:21.661 "reconnect_delay_sec": 0, 00:35:21.661 "fast_io_fail_timeout_sec": 0, 00:35:21.661 "psk": "key0", 00:35:21.661 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:21.661 "hdgst": false, 00:35:21.661 "ddgst": false, 00:35:21.661 "multipath": "multipath" 00:35:21.661 } 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "method": "bdev_nvme_set_hotplug", 00:35:21.661 "params": { 00:35:21.661 "period_us": 100000, 00:35:21.661 "enable": false 00:35:21.661 } 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "method": "bdev_wait_for_examine" 
00:35:21.661 } 00:35:21.661 ] 00:35:21.661 }, 00:35:21.661 { 00:35:21.661 "subsystem": "nbd", 00:35:21.661 "config": [] 00:35:21.661 } 00:35:21.661 ] 00:35:21.661 }' 00:35:21.661 06:39:11 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:21.661 06:39:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:21.661 [2024-12-08 06:39:11.667391] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 00:35:21.661 [2024-12-08 06:39:11.667466] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260508 ] 00:35:21.661 [2024-12-08 06:39:11.733958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.921 [2024-12-08 06:39:11.794974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:21.921 [2024-12-08 06:39:11.987497] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:22.184 06:39:12 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.184 06:39:12 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:22.184 06:39:12 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:22.184 06:39:12 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:22.184 06:39:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:22.444 06:39:12 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:22.444 06:39:12 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:22.444 06:39:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:22.444 06:39:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:22.444 06:39:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:22.444 06:39:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:22.444 06:39:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:22.702 06:39:12 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:22.702 06:39:12 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:22.702 06:39:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:22.702 06:39:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:22.702 06:39:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:22.702 06:39:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:22.702 06:39:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:22.959 06:39:12 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:22.959 06:39:12 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:22.959 06:39:12 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:22.959 06:39:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:23.218 06:39:13 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:23.218 06:39:13 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:23.218 06:39:13 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.v1ok0xTcLE /tmp/tmp.MffuKi0gEA 00:35:23.218 06:39:13 keyring_file -- keyring/file.sh@20 -- # killprocess 1260508 00:35:23.218 06:39:13 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1260508 ']' 00:35:23.218 06:39:13 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1260508 00:35:23.218 06:39:13 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:23.218 06:39:13 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:23.218 06:39:13 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1260508 00:35:23.218 06:39:13 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:23.218 06:39:13 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:23.218 06:39:13 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1260508' 00:35:23.218 killing process with pid 1260508 00:35:23.218 06:39:13 keyring_file -- common/autotest_common.sh@973 -- # kill 1260508 00:35:23.218 Received shutdown signal, test time was about 1.000000 seconds 00:35:23.218 00:35:23.218 Latency(us) 00:35:23.218 [2024-12-08T05:39:13.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.218 [2024-12-08T05:39:13.337Z] =================================================================================================================== 00:35:23.218 [2024-12-08T05:39:13.337Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:23.218 06:39:13 keyring_file -- common/autotest_common.sh@978 -- # wait 1260508 00:35:23.476 06:39:13 keyring_file -- keyring/file.sh@21 -- # killprocess 1258294 00:35:23.476 06:39:13 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1258294 ']' 00:35:23.476 06:39:13 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1258294 00:35:23.476 06:39:13 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:23.476 06:39:13 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:23.476 06:39:13 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1258294 00:35:23.476 06:39:13 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:23.476 06:39:13 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:23.477 06:39:13 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1258294' 00:35:23.477 killing process with pid 1258294 00:35:23.477 06:39:13 keyring_file -- common/autotest_common.sh@973 -- # kill 1258294 00:35:23.477 06:39:13 keyring_file -- common/autotest_common.sh@978 -- # wait 1258294 00:35:24.045 00:35:24.045 real 0m14.938s 00:35:24.045 user 0m38.066s 00:35:24.045 sys 0m3.359s 00:35:24.045 06:39:13 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:24.045 06:39:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:24.045 ************************************ 00:35:24.045 END TEST keyring_file 00:35:24.045 ************************************ 00:35:24.045 06:39:13 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:24.045 06:39:13 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:24.045 06:39:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:24.045 06:39:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 
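The refcount assertions keyring_file just finished with (file.sh@103, @122, @123 above) are built from three one-line helpers in test/keyring/common.sh, all visible in the xtrace. Reconstructed as a sketch, with the rpc.py path shortened and the jq filters quoted verbatim from the trace:

    bperf_cmd()  { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }                      # common.sh@8
    get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; } # common.sh@10
    get_refcnt() { get_key "$1" | jq -r .refcnt; }                                    # common.sh@12
    (( $(get_refcnt key0) == 2 ))   # e.g. file.sh@122 above expects 2 for key0
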
00:35:24.045 06:39:13 -- common/autotest_common.sh@10 -- # set +x 00:35:24.045 ************************************ 00:35:24.045 START TEST keyring_linux 00:35:24.045 ************************************ 00:35:24.045 06:39:13 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:24.045 Joined session keyring: 642495775 00:35:24.045 * Looking for test storage... 00:35:24.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:24.045 06:39:14 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:24.045 06:39:14 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:35:24.045 06:39:14 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:24.045 06:39:14 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:24.045 06:39:14 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:24.045 06:39:14 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:24.045 06:39:14 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:24.045 06:39:14 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:24.045 06:39:14 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:24.045 06:39:14 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:24.045 06:39:14 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:24.045 06:39:14 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:24.045 06:39:14 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:24.045 06:39:14 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:24.045 06:39:14 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:24.045 06:39:14 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:24.046 06:39:14 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:24.046 06:39:14 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:24.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.046 --rc genhtml_branch_coverage=1 00:35:24.046 --rc genhtml_function_coverage=1 00:35:24.046 --rc genhtml_legend=1 00:35:24.046 --rc geninfo_all_blocks=1 00:35:24.046 --rc geninfo_unexecuted_blocks=1 00:35:24.046 00:35:24.046 ' 00:35:24.046 06:39:14 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:24.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.046 --rc genhtml_branch_coverage=1 00:35:24.046 --rc genhtml_function_coverage=1 00:35:24.046 --rc genhtml_legend=1 00:35:24.046 --rc geninfo_all_blocks=1 00:35:24.046 --rc geninfo_unexecuted_blocks=1 00:35:24.046 00:35:24.046 ' 00:35:24.046 06:39:14 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:24.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.046 --rc genhtml_branch_coverage=1 00:35:24.046 --rc genhtml_function_coverage=1 00:35:24.046 --rc genhtml_legend=1 00:35:24.046 --rc geninfo_all_blocks=1 00:35:24.046 --rc geninfo_unexecuted_blocks=1 00:35:24.046 00:35:24.046 ' 00:35:24.046 06:39:14 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:24.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.046 --rc genhtml_branch_coverage=1 00:35:24.046 --rc genhtml_function_coverage=1 00:35:24.046 --rc genhtml_legend=1 00:35:24.046 --rc geninfo_all_blocks=1 00:35:24.046 --rc geninfo_unexecuted_blocks=1 00:35:24.046 00:35:24.046 ' 00:35:24.046 06:39:14 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:24.046 06:39:14 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:24.046 06:39:14 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:24.046 06:39:14 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.046 06:39:14 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.046 06:39:14 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.046 06:39:14 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:24.046 06:39:14 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:24.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:24.046 06:39:14 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:24.046 06:39:14 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:24.046 06:39:14 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:24.046 06:39:14 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:24.046 06:39:14 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:24.046 06:39:14 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:24.046 06:39:14 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:24.046 06:39:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:24.046 06:39:14 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:24.046 06:39:14 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:24.046 06:39:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:24.046 06:39:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:24.046 06:39:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:24.046 06:39:14 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:24.305 06:39:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:24.305 06:39:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:24.305 /tmp/:spdk-test:key0 00:35:24.305 06:39:14 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:24.305 06:39:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:24.305 06:39:14 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:24.305 06:39:14 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:24.305 06:39:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:24.305 06:39:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:24.305 
06:39:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:24.305 06:39:14 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:24.305 06:39:14 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:24.305 06:39:14 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:24.305 06:39:14 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:24.305 06:39:14 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:24.305 06:39:14 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:24.305 06:39:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:24.305 06:39:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:24.305 /tmp/:spdk-test:key1 00:35:24.305 06:39:14 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1260874 00:35:24.305 06:39:14 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:24.305 06:39:14 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1260874 00:35:24.305 06:39:14 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1260874 ']' 00:35:24.305 06:39:14 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.305 06:39:14 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.305 06:39:14 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:24.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.305 06:39:14 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.305 06:39:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:24.305 [2024-12-08 06:39:14.284360] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
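Both /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 above come out of format_interchange_psk, which the trace shows delegating to a small Python program (nvmf/common.sh@733). A sketch of what that step computes for key0, assuming the interchange format is the key bytes plus a little-endian CRC32 trailer, base64-encoded behind the NVMeTLSkey-1 prefix and the "00" digest indicator (digest 0 = no PSK hash); the CRC byte order is an assumption and is not visible in this trace:

    python3 -c 'import base64, zlib; k = b"00112233445566778899aabbccddeeff"; print("NVMeTLSkey-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")'

If the assumed convention matches, this prints the same NVMeTLSkey-1:00:MDAx...JEiQ: string that keyctl loads into the session keyring below.
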
00:35:24.305 [2024-12-08 06:39:14.284435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260874 ] 00:35:24.305 [2024-12-08 06:39:14.349392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.305 [2024-12-08 06:39:14.406050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:24.563 06:39:14 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:24.563 06:39:14 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:24.563 06:39:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:24.563 06:39:14 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.563 06:39:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:24.563 [2024-12-08 06:39:14.657183] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.563 null0 00:35:24.822 [2024-12-08 06:39:14.689225] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:24.822 [2024-12-08 06:39:14.689777] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:24.822 06:39:14 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.822 06:39:14 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:24.822 354804316 00:35:24.822 06:39:14 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:24.822 574365859 00:35:24.822 06:39:14 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1260969 00:35:24.822 06:39:14 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:24.822 06:39:14 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1260969 /var/tmp/bperf.sock 00:35:24.822 06:39:14 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1260969 ']' 00:35:24.822 06:39:14 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:24.822 06:39:14 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.822 06:39:14 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:24.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:24.822 06:39:14 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.822 06:39:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:24.822 [2024-12-08 06:39:14.758011] Starting SPDK v25.01-pre git sha1 c0f3f2d18 / DPDK 24.03.0 initialization... 
00:35:24.822 [2024-12-08 06:39:14.758126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260969 ] 00:35:24.822 [2024-12-08 06:39:14.823225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.822 [2024-12-08 06:39:14.879446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:25.080 06:39:14 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:25.080 06:39:14 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:25.080 06:39:14 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:25.080 06:39:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:25.337 06:39:15 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:25.337 06:39:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:25.594 06:39:15 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:25.594 06:39:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:25.852 [2024-12-08 06:39:15.878386] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:25.852 nvme0n1 00:35:25.852 06:39:15 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:25.852 06:39:15 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:25.852 06:39:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:25.852 06:39:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:25.852 06:39:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:25.852 06:39:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:26.417 06:39:16 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:26.417 06:39:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:26.417 06:39:16 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:26.417 06:39:16 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:26.417 06:39:16 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:26.417 06:39:16 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:26.417 06:39:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:26.417 06:39:16 keyring_linux -- keyring/linux.sh@25 -- # sn=354804316 00:35:26.417 06:39:16 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:26.417 06:39:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:26.417 06:39:16 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 354804316 == \3\5\4\8\0\4\3\1\6 ]] 00:35:26.417 06:39:16 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 354804316 00:35:26.417 06:39:16 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:26.417 06:39:16 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:26.675 Running I/O for 1 seconds... 00:35:27.611 11599.00 IOPS, 45.31 MiB/s 00:35:27.611 Latency(us) 00:35:27.611 [2024-12-08T05:39:17.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.611 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:27.611 nvme0n1 : 1.01 11605.14 45.33 0.00 0.00 10963.33 3106.89 14175.19 00:35:27.611 [2024-12-08T05:39:17.730Z] =================================================================================================================== 00:35:27.611 [2024-12-08T05:39:17.730Z] Total : 11605.14 45.33 0.00 0.00 10963.33 3106.89 14175.19 00:35:27.611 { 00:35:27.611 "results": [ 00:35:27.611 { 00:35:27.611 "job": "nvme0n1", 00:35:27.611 "core_mask": "0x2", 00:35:27.611 "workload": "randread", 00:35:27.611 "status": "finished", 00:35:27.611 "queue_depth": 128, 00:35:27.611 "io_size": 4096, 00:35:27.611 "runtime": 1.010587, 00:35:27.611 "iops": 11605.136420713901, 00:35:27.611 "mibps": 45.33256414341368, 00:35:27.611 "io_failed": 0, 00:35:27.611 "io_timeout": 0, 00:35:27.611 "avg_latency_us": 10963.328640291042, 00:35:27.611 "min_latency_us": 3106.8918518518517, 00:35:27.611 "max_latency_us": 14175.194074074074 00:35:27.611 } 00:35:27.611 ], 00:35:27.611 "core_count": 1 00:35:27.611 } 00:35:27.611 06:39:17 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:27.611 06:39:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:27.870 06:39:17 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:27.870 06:39:17 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:27.870 06:39:17 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:27.870 06:39:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:27.870 06:39:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:27.870 06:39:17 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:28.129 06:39:18 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:28.129 06:39:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:28.129 06:39:18 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:28.129 06:39:18 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:28.129 06:39:18 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:28.129 06:39:18 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:35:28.129 06:39:18 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:28.129 06:39:18 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:28.129 06:39:18 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:28.129 06:39:18 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:28.129 06:39:18 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:28.129 06:39:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:28.388 [2024-12-08 06:39:18.473450] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:28.388 [2024-12-08 06:39:18.473712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbb560 (107): Transport endpoint is not connected 00:35:28.388 [2024-12-08 06:39:18.474691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbb560 (9): Bad file descriptor 00:35:28.388 [2024-12-08 06:39:18.475691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:28.388 [2024-12-08 06:39:18.475730] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:28.388 [2024-12-08 06:39:18.475747] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:28.388 [2024-12-08 06:39:18.475764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:28.388 request: 00:35:28.388 { 00:35:28.388 "name": "nvme0", 00:35:28.388 "trtype": "tcp", 00:35:28.388 "traddr": "127.0.0.1", 00:35:28.388 "adrfam": "ipv4", 00:35:28.388 "trsvcid": "4420", 00:35:28.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:28.388 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:28.388 "prchk_reftag": false, 00:35:28.388 "prchk_guard": false, 00:35:28.388 "hdgst": false, 00:35:28.388 "ddgst": false, 00:35:28.388 "psk": ":spdk-test:key1", 00:35:28.388 "allow_unrecognized_csi": false, 00:35:28.388 "method": "bdev_nvme_attach_controller", 00:35:28.388 "req_id": 1 00:35:28.388 } 00:35:28.388 Got JSON-RPC error response 00:35:28.388 response: 00:35:28.388 { 00:35:28.388 "code": -5, 00:35:28.388 "message": "Input/output error" 00:35:28.388 } 00:35:28.388 06:39:18 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:28.388 06:39:18 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:28.388 06:39:18 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:28.388 06:39:18 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@33 -- # sn=354804316 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 354804316 00:35:28.388 1 links removed 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@33 -- # sn=574365859 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 574365859 00:35:28.388 1 links removed 00:35:28.388 06:39:18 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1260969 00:35:28.388 06:39:18 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1260969 ']' 00:35:28.388 06:39:18 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1260969 00:35:28.388 06:39:18 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:28.647 06:39:18 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:28.647 06:39:18 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1260969 00:35:28.647 06:39:18 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:28.647 06:39:18 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:28.647 06:39:18 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1260969' 00:35:28.647 killing process with pid 1260969 00:35:28.647 06:39:18 keyring_linux -- common/autotest_common.sh@973 -- # kill 1260969 00:35:28.647 Received shutdown signal, test time was about 1.000000 seconds 00:35:28.647 00:35:28.647 
Latency(us) 00:35:28.647 [2024-12-08T05:39:18.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.647 [2024-12-08T05:39:18.766Z] =================================================================================================================== 00:35:28.647 [2024-12-08T05:39:18.766Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:28.647 06:39:18 keyring_linux -- common/autotest_common.sh@978 -- # wait 1260969 00:35:28.647 06:39:18 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1260874 00:35:28.647 06:39:18 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1260874 ']' 00:35:28.647 06:39:18 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1260874 00:35:28.647 06:39:18 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:28.647 06:39:18 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:28.647 06:39:18 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1260874 00:35:28.906 06:39:18 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:28.906 06:39:18 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:28.906 06:39:18 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1260874' 00:35:28.906 killing process with pid 1260874 00:35:28.906 06:39:18 keyring_linux -- common/autotest_common.sh@973 -- # kill 1260874 00:35:28.906 06:39:18 keyring_linux -- common/autotest_common.sh@978 -- # wait 1260874 00:35:29.164 00:35:29.164 real 0m5.172s 00:35:29.164 user 0m10.338s 00:35:29.164 sys 0m1.592s 00:35:29.164 06:39:19 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:29.164 06:39:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:29.164 ************************************ 00:35:29.164 END TEST keyring_linux 00:35:29.164 ************************************ 00:35:29.164 06:39:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:29.164 06:39:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:29.164 06:39:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:29.164 06:39:19 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:29.164 06:39:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:29.164 06:39:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:29.164 06:39:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:29.164 06:39:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:29.164 06:39:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:29.164 06:39:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:29.164 06:39:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:29.164 06:39:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:29.164 06:39:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:29.164 06:39:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:29.164 06:39:19 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:29.164 06:39:19 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:29.164 06:39:19 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:29.164 06:39:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:29.164 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:35:29.164 06:39:19 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:29.164 06:39:19 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:29.164 06:39:19 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:29.164 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:35:31.067 INFO: APP EXITING 
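The two "1 links removed" lines in the cleanup above come from unlink_key in linux.sh, which resolves each key's serial number from the session keyring and then unlinks it; reconstructed from the traced commands:

    get_keysn() { keyctl search @s user "$1"; }   # linux.sh@16
    sn=$(get_keysn :spdk-test:key0)               # 354804316 in this run
    keyctl unlink "$sn"                           # keyctl reports "1 links removed"
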
00:35:31.067 INFO: killing all VMs
00:35:31.067 INFO: killing vhost app
00:35:31.067 INFO: EXIT DONE
00:35:32.446 0000:82:00.0 (8086 0a54): Already using the nvme driver
00:35:32.446 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:35:32.446 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:35:32.446 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:35:32.446 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:35:32.446 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:35:32.446 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:35:32.446 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:35:32.446 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:35:32.446 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:35:32.446 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:35:32.446 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:35:32.446 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:35:32.446 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:35:32.446 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:35:32.446 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:35:32.446 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:35:33.822 Cleaning
00:35:33.822 Removing: /var/run/dpdk/spdk0/config
00:35:33.822 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:35:33.822 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:35:33.822 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:35:33.822 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:35:33.822 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:35:33.822 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:35:33.822 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:35:33.822 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:35:33.822 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:35:33.822 Removing: /var/run/dpdk/spdk0/hugepage_info
00:35:33.822 Removing: /var/run/dpdk/spdk1/config
00:35:33.822 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:35:33.822 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:35:33.822 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:35:33.822 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:35:33.822 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:35:33.822 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:35:33.822 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:35:33.822 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:35:33.822 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:35:33.822 Removing: /var/run/dpdk/spdk1/hugepage_info
00:35:33.822 Removing: /var/run/dpdk/spdk2/config
00:35:33.822 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:35:33.822 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:35:33.822 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:35:33.822 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:35:33.822 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:35:33.822 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:35:33.822 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:35:33.822 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:35:33.822 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:35:33.822 Removing: /var/run/dpdk/spdk2/hugepage_info
00:35:33.822 Removing: /var/run/dpdk/spdk3/config
00:35:33.822 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:35:33.822 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:35:33.822 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:35:33.822 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:35:33.822 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:35:33.822 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:35:33.822 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:35:33.822 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:35:33.822 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:35:33.822 Removing: /var/run/dpdk/spdk3/hugepage_info
00:35:33.822 Removing: /var/run/dpdk/spdk4/config
00:35:33.822 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:35:33.822 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:35:33.822 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:35:33.822 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:35:33.822 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:35:33.822 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:35:33.822 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:35:33.822 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:35:33.822 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:35:33.822 Removing: /var/run/dpdk/spdk4/hugepage_info
00:35:33.822 Removing: /dev/shm/bdev_svc_trace.1
00:35:33.822 Removing: /dev/shm/nvmf_trace.0
00:35:33.822 Removing: /dev/shm/spdk_tgt_trace.pid937988
00:35:33.822 Removing: /var/run/dpdk/spdk0
00:35:33.822 Removing: /var/run/dpdk/spdk1
00:35:33.822 Removing: /var/run/dpdk/spdk2
00:35:33.822 Removing: /var/run/dpdk/spdk3
00:35:33.822 Removing: /var/run/dpdk/spdk4
00:35:33.822 Removing: /var/run/dpdk/spdk_pid1000731
00:35:33.822 Removing: /var/run/dpdk/spdk_pid1027847
00:35:33.822 Removing: /var/run/dpdk/spdk_pid1031162
00:35:33.822 Removing: /var/run/dpdk/spdk_pid1035650
00:35:33.822 Removing: /var/run/dpdk/spdk_pid1039945
00:35:33.822 Removing: /var/run/dpdk/spdk_pid1040020
00:35:33.822 Removing: /var/run/dpdk/spdk_pid1040603
00:35:33.822 Removing: /var/run/dpdk/spdk_pid1041251
00:35:33.822 Removing: /var/run/dpdk/spdk_pid1041873
00:35:33.822 Removing: /var/run/dpdk/spdk_pid1042288
00:35:33.822 Removing: /var/run/dpdk/spdk_pid1042319
00:35:33.822 Removing: /var/run/dpdk/spdk_pid1042463
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1042597
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1042604
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1043256
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1043908
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1044456
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1044857
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1044975
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1045118
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1046018
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1046798
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1052218
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1080226
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1083166
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1084462
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1086291
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1086432
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1086572
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1086715
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1087166
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1088481
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1089339
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1089771
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1091372
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1091687
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1092250
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1094658
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1098015
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1098017
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1098019
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1100204
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1105082
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1107728
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1111643
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1112598
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1113684
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1114650
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1117588
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1120650
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1122994
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1127294
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1127298
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1130098
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1130354
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1130493
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1130756
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1130767
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1133555
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1133948
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1136583
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1138556
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1141999
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1145478
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1152258
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1157250
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1157289
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1170193
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1170715
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1171127
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1171602
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1172200
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1172645
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1173057
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1173461
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1175987
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1176176
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1179947
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1180128
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1183504
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1186014
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1193574
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1193973
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1196510
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1196668
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1199305
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1203020
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1205056
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1211465
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1216706
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1217885
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1218547
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1229388
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1231652
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1233655
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1238600
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1238724
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1241648
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1243055
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1244455
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1245313
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1246609
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1247478
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1252832
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1253196
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1253585
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1255148
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1255548
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1255827
00:35:34.081 Removing: /var/run/dpdk/spdk_pid1258294
00:35:34.340 Removing: /var/run/dpdk/spdk_pid1258302
00:35:34.340 Removing: /var/run/dpdk/spdk_pid1260508
00:35:34.340 Removing: /var/run/dpdk/spdk_pid1260874
00:35:34.340 Removing: /var/run/dpdk/spdk_pid1260969
00:35:34.340 Removing: /var/run/dpdk/spdk_pid936305
00:35:34.340 Removing: /var/run/dpdk/spdk_pid937051
00:35:34.340 Removing: /var/run/dpdk/spdk_pid937988
00:35:34.340 Removing: /var/run/dpdk/spdk_pid938316
00:35:34.340 Removing: /var/run/dpdk/spdk_pid939003
00:35:34.340 Removing: /var/run/dpdk/spdk_pid939142
00:35:34.340 Removing: /var/run/dpdk/spdk_pid939863
00:35:34.340 Removing: /var/run/dpdk/spdk_pid939987
00:35:34.340 Removing: /var/run/dpdk/spdk_pid940245
00:35:34.340 Removing: /var/run/dpdk/spdk_pid941455
00:35:34.340 Removing: /var/run/dpdk/spdk_pid942380
00:35:34.340 Removing: /var/run/dpdk/spdk_pid942691
00:35:34.340 Removing: /var/run/dpdk/spdk_pid942892
00:35:34.340 Removing: /var/run/dpdk/spdk_pid943169
00:35:34.340 Removing: /var/run/dpdk/spdk_pid943418
00:35:34.340 Removing: /var/run/dpdk/spdk_pid943583
00:35:34.340 Removing: /var/run/dpdk/spdk_pid943738
00:35:34.340 Removing: /var/run/dpdk/spdk_pid943924
00:35:34.340 Removing: /var/run/dpdk/spdk_pid944235
00:35:34.340 Removing: /var/run/dpdk/spdk_pid946728
00:35:34.340 Removing: /var/run/dpdk/spdk_pid946890
00:35:34.340 Removing: /var/run/dpdk/spdk_pid947052
00:35:34.340 Removing: /var/run/dpdk/spdk_pid947071
00:35:34.340 Removing: /var/run/dpdk/spdk_pid947486
00:35:34.340 Removing: /var/run/dpdk/spdk_pid947502
00:35:34.340 Removing: /var/run/dpdk/spdk_pid947893
00:35:34.340 Removing: /var/run/dpdk/spdk_pid947934
00:35:34.340 Removing: /var/run/dpdk/spdk_pid948109
00:35:34.340 Removing: /var/run/dpdk/spdk_pid948229
00:35:34.340 Removing: /var/run/dpdk/spdk_pid948395
00:35:34.340 Removing: /var/run/dpdk/spdk_pid948412
00:35:34.340 Removing: /var/run/dpdk/spdk_pid948908
00:35:34.340 Removing: /var/run/dpdk/spdk_pid949063
00:35:34.340 Removing: /var/run/dpdk/spdk_pid949264
00:35:34.340 Removing: /var/run/dpdk/spdk_pid951485
00:35:34.340 Removing: /var/run/dpdk/spdk_pid954054
00:35:34.340 Removing: /var/run/dpdk/spdk_pid961813
00:35:34.340 Removing: /var/run/dpdk/spdk_pid962307
00:35:34.340 Removing: /var/run/dpdk/spdk_pid964760
00:35:34.340 Removing: /var/run/dpdk/spdk_pid965040
00:35:34.340 Removing: /var/run/dpdk/spdk_pid967574
00:35:34.340 Removing: /var/run/dpdk/spdk_pid971449
00:35:34.340 Removing: /var/run/dpdk/spdk_pid973524
00:35:34.340 Removing: /var/run/dpdk/spdk_pid979977
00:35:34.340 Removing: /var/run/dpdk/spdk_pid985250
00:35:34.340 Removing: /var/run/dpdk/spdk_pid986569
00:35:34.340 Removing: /var/run/dpdk/spdk_pid987242
00:35:34.340 Removing: /var/run/dpdk/spdk_pid998292
00:35:34.340 Clean
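The "Cleaning" phase above is the autotest teardown: it deletes the per-instance DPDK runtime directories (/var/run/dpdk/spdk0 through spdk4, each holding a config file, fbarray_memseg-* hugepage mappings, fbarray_memzone and hugepage_info), the /dev/shm trace files left by the SPDK target processes, and the per-PID files under /var/run/dpdk. A minimal shell sketch of an equivalent manual cleanup, assuming the default /var/run/dpdk prefix seen in the log; the loop is illustrative, not the actual autotest code:

# Illustrative sketch: remove stale DPDK runtime state for SPDK instances spdk0..spdk4.
for d in /var/run/dpdk/spdk{0..4}; do
  [ -d "$d" ] && sudo rm -rf "$d"   # config, fbarray_memseg-*, fbarray_memzone, hugepage_info
done
# Also drop per-PID files and leftover SPDK trace shared memory.
sudo rm -f /var/run/dpdk/spdk_pid* /dev/shm/*_trace.*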
00:35:34.340 06:39:24 -- common/autotest_common.sh@1453 -- # return 0
00:35:34.340 06:39:24 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:34.340 06:39:24 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:34.340 06:39:24 -- common/autotest_common.sh@10 -- # set +x
00:35:34.340 06:39:24 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:34.340 06:39:24 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:34.340 06:39:24 -- common/autotest_common.sh@10 -- # set +x
00:35:34.340 06:39:24 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:34.340 06:39:24 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:34.340 06:39:24 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:34.600 06:39:24 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:34.600 06:39:24 -- spdk/autotest.sh@398 -- # hostname
00:35:34.600 06:39:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:34.600 geninfo: WARNING: invalid characters removed from testname!
00:36:06.719 06:39:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:10.014 06:39:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:13.294 06:40:02 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:15.833 06:40:05 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:19.130 06:40:08 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:22.428 06:40:11 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:24.968 06:40:14 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
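The lcov sequence above is a standard capture/merge/filter coverage flow: the -c run captures the counters accumulated during the tests into cov_test.info, the -a run folds that together with the pre-test baseline into cov_total.info, and the repeated -r runs strip paths that should not count toward coverage (the bundled dpdk tree, system headers under /usr, example and app sources). A condensed sketch of the same flow, assuming lcov is installed, a build tree at ./spdk, and a baseline cov_base.info captured the same way before the tests ran; the file names mirror the log, while the -d path and -t test name are placeholders:

# Capture counters gathered while the tests ran.
lcov -q -c --no-external -d ./spdk -t my-host -o cov_test.info
# Merge the pre-test baseline with the test capture into one tracefile.
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
# Filter out paths that should not count toward coverage.
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
lcov -q -r cov_total.info '/usr/*' -o cov_total.info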
00:36:24.968 06:40:14 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:24.968 06:40:14 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:24.968 06:40:14 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:24.968 06:40:14 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:24.968 06:40:14 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:24.968 + [[ -n 865715 ]]
00:36:24.968 + sudo kill 865715
00:36:24.978 [Pipeline] }
00:36:24.990 [Pipeline] // stage
00:36:24.995 [Pipeline] }
00:36:25.008 [Pipeline] // timeout
00:36:25.013 [Pipeline] }
00:36:25.026 [Pipeline] // catchError
00:36:25.030 [Pipeline] }
00:36:25.044 [Pipeline] // wrap
00:36:25.050 [Pipeline] }
00:36:25.062 [Pipeline] // catchError
00:36:25.070 [Pipeline] stage
00:36:25.072 [Pipeline] { (Epilogue)
00:36:25.084 [Pipeline] catchError
00:36:25.087 [Pipeline] {
00:36:25.099 [Pipeline] echo
00:36:25.100 Cleanup processes
00:36:25.106 [Pipeline] sh
00:36:25.403 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:25.403 1271530 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:25.416 [Pipeline] sh
00:36:25.702 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:25.702 ++ grep -v 'sudo pgrep'
00:36:25.702 ++ awk '{print $1}'
00:36:25.702 + sudo kill -9
00:36:25.702 + true
00:36:25.715 [Pipeline] sh
00:36:26.002 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:35.999 [Pipeline] sh
00:36:36.288 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:36.288 Artifacts sizes are good
00:36:36.304 [Pipeline] archiveArtifacts
00:36:36.312 Archiving artifacts
00:36:36.498 [Pipeline] sh
00:36:36.802 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:36.818 [Pipeline] cleanWs
00:36:36.829 [WS-CLEANUP] Deleting project workspace...
00:36:36.829 [WS-CLEANUP] Deferred wipeout is used...
00:36:36.836 [WS-CLEANUP] done
00:36:36.838 [Pipeline] }
00:36:36.855 [Pipeline] // catchError
00:36:36.868 [Pipeline] sh
00:36:37.158 + logger -p user.info -t JENKINS-CI
00:36:37.166 [Pipeline] }
00:36:37.180 [Pipeline] // stage
00:36:37.186 [Pipeline] }
00:36:37.201 [Pipeline] // node
00:36:37.206 [Pipeline] End of Pipeline
00:36:37.243 Finished: SUCCESS
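For reference, the "Cleanup processes" step in the epilogue above uses the same process-reaping idiom as the prologue: pgrep -af lists every process whose command line mentions the workspace, grep -v drops the pgrep invocation itself, awk keeps the PID column, and kill -9 removes whatever is left, with the trailing "+ true" ensuring an empty match cannot fail the stage. A shell sketch of that idiom, with the workspace path taken from the log and || true standing in for the log's "+ true":

# Sketch: force-kill stray test processes; tolerate an empty PID list.
pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk | grep -v 'sudo pgrep' | awk '{print $1}')
sudo kill -9 $pids || true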